Thanks for the cool discussion Ryan and Nate! This thread seemed pretty insightful to me. Here’s some thoughts / things I’d like to clarify (mostly responding to Nate’s comments).[1]
Who’s doing this trade?
In places it sounds like Ryan and Nate are talking about predecessor civilisations like humanity agreeing to the mutual insurance scheme? But humans aren’t currently capable of making our decisions logically dependent on those of aliens, or capable of rescuing them. So to be precise the entity engaging in this scheme or other acausal interactions on our behalf is our successor, probably a FAI, in the (possibly counterfactual or counterlogical) worlds where we solve alignment.
Nate says:
Roughly speaking, I suspect that the sort of civilizations that aren’t totally fucked can already see that “comb through reality for people who can see me and make their decisions logically dependent on mine” is a better use of insurance resources, by the time they even consider this policy.
Unlike us, our FAI can see other aliens. So I think the operative part of that sentence is “comb through reality”—Nate’s envisioning a scenario where with ~85% probability our FAI has 0 reality-fluid before any acausal trades are made.[2] If aliens restrict themselves to counterparties with nonzero reality-fluid, and humans turn out to not be at a competence level where we can solve alignment, then our FAI doesn’t make the cut.
Note: Which FAI we deploy is unlikely to be physically overdetermined in scenarios where alignment succeeds, and definitely seems unlikely to be determined by more coarse-grained (not purely physical) models of how a successor to present-day humanity comes about. (The same goes for which UFAI we deploy.) I’m going to ignore this fact for simplicity and talk about a single FAI; let me know if you think it causes problems for what I say below.
Trading with nonexistent agents is normal
I do see an argument that agents trying to do insurance with similar motives to ours could strongly prefer to trade with agents who do ex post exist, and in particular those agents that ex post exist with more reality-fluid. It’s that insurance is an inherently risk-averse enterprise.[3] It doesn’t matter if someone offers us a fantastic but high-variance ex ante deal, when the whole reason we’re looking for insurance is in order to maximise the chances of a non-sucky ex post outcome. (One important caveat is that an agent might be able to do some trades to first increase their ex ante resources, and then leverage those increased resources in order to purchase better guarantees than they’d initially be able to buy.)
On the other hand, I think an agent with increasing utility in resources will readily trade with counterparties who wouldn’t ex post exist absent such a trade, but who have some ex ante chance of naturally existing according to a less informed prior of the agent. I get the impression Nate thinks agents would avoid such trades, but I’m not sure / this doesn’t seem to be explicit outside of the mutual insurance scenario.
There are two major advantages to trading with ex post nonexistent agents, as opposed to updating on (facts upstream of) their existence and consequently rejecting trade with them:
Ex post nonexistent agents who are risk-averse w.r.t. their likelihood of meeting some future threshold of resources/value, like many humans seem to be, could offer you deals that are very attractive ex ante.
Adding agents who (absent your trade) don’t ex post exist to the pool of counterparties you’re willing to trade with allows you to be much more selective when looking for the most attractive ex ante trades.
The main disadvantage is that by not conditioning on a counterparty’s existence you’re more likely to be throwing resources away ex post. The counterparty needs to be able to compensate you for this risk (as the mugger does in counterfactual mugging). I’d expect this bar is going to be met very frequently.
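The compensation condition can be made concrete with a toy ex ante accounting (all numbers here are hypothetical, not from the thread): if your prior that the counterparty exists is q and honoring your side costs c, the payment you receive in the worlds where they do exist must exceed c/q for the commitment to beat not trading.

```python
# Toy ex-ante accounting for trading with a possibly-nonexistent counterparty.
# Illustrative numbers only: q, cost, and payment are hypothetical.

def ex_ante_value(q, cost, payment):
    """Expected value of committing to the trade before learning
    whether the counterparty exists.

    q       : prior probability the counterparty exists
    cost    : resources we commit to honoring our side
    payment : what the counterparty pays us in worlds where they exist
    """
    return q * payment - cost

# The counterparty must compensate at a rate better than 1/q
# for the trade to be ex ante profitable.
q = 0.1          # 10% prior that they exist
cost = 1.0       # one unit of resources spent honoring our side
breakeven = cost / q  # payment must exceed 10 units

print(ex_ante_value(q, cost, payment=15.0))  # 0.5: worth doing
print(ex_ante_value(q, cost, payment=5.0))   # -0.5: not worth doing
```

This is the same structure as counterfactual mugging: the mugger's offered payout in the heads-world has to be large enough to compensate for the tails-world loss, weighted by the prior.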
To recap, I’m saying that for plausible agents carrying out trades with our FAI, Nate’s 2^-75 number won’t matter. Instead, it would be something closer to the 85% number that matters—an ex ante rather than an ex post estimate of the FAI’s reality-fluid.
But would our FAI do the trade if it exists?
Nate says (originally talking about aliens instead of humanity):
As I understand it, the argument goes something like “your counterfactual picture of reality should include worlds in which your whole civilization turned out to be much much less competent, and so when you imagine the multiverse where you pay for all humanity to live, you should see that, in the parts of the multiverse where you’re totally utterly completely incompetent and too poor to save anything but a fraction of your own brethren, somebody else pays to save you”.
I agree that doing an insurance trade on behalf of a civilisation requires not conditioning on that civilisation’s competence. Nate implies that aliens’ civilisational competence is “tightly intertwined with [aliens’] basic reasoning faculties”, and this seems probably true for alien or human members of predecessor civilisations? But I don’t know why the civilisational competence of a FAI’s predecessor would be tightly intertwined with the FAI’s cognition. As mentioned above, I think the relevant actor here is our FAI, not our current selves.
We can further specify civilisational competence (relative to the stakes of alignment) as a function of two variables:
Physical facts about a civilisation’s history (i.e. the arrangement of atoms).
Logical facts (beyond those accessible to current humans) that govern the relationship between civilisations instantiated via physics, and what sort of AI certain types of civilisations are likely to deploy.
Either of these when combined with the other provides evidence about what sort of AI a predecessor civilisation deploys, but each will be uninformative on its own. I have in mind that agents executing an insurance trade would condition on all physical facts about their counterparty’s civilisation—up until some truncation point that’s plausibly late enough to be capturing our current selves—but would not condition on some logical facts that are necessary to interpret those physical facts into a ~determinate answer as to whether the civilisation solves alignment.
Conditioning on those logical facts sounds pretty analogous to conditioning on a digit of pi to me. The important part is that the facts an agent chooses not to condition on aren’t determined by core parts of an agent’s cognition / decision procedure. Those facts being determinative of an agent’s amount of reality-juice is typically fine, this just discounts the value of the resources they possess when making such trades.
Does this mean we can have nice things?
So overall, I think that aliens or earth-originating UFAIs (who aren’t motivated by insurance) would be pretty interested in trading with our FAI, and vice-versa. Counterparties would discount the FAI’s resources by their prior probability that it’s deployed (before conditioning on factors that pin this down).
Because we’re assuming our FAI would be willing to offer terms that are terrible for us if denominated in measure-weighted resources, counterparties would gain ex ante resources by engaging in an insurance trade with it. Those resources could later be used to engage in trade with others who are themselves willing to (indirectly) trade with nonexistent agents, and who don’t have much more sceptical priors about the deployment odds of our FAI. So because the trade at hand yields a net profit, I don’t think it competes much with ordinary alternative demands for counterparties’ resources.
Nevertheless, here’s a few (nonexhaustive) reasons why this trade opportunity wouldn’t be taken by another updateless AI:
The agent has a better trading opportunity which is either sensitive to when in (logical) time they start fulfilling it, or which demands all the agent’s current resources (at the time of discovering the trade) without compensating the agent for further profits.
The expected transaction costs of finding agents like our FAI outweigh the expected benefits from trade with it.
This might be plausible for aliens without hypercompute; I don’t think it’s plausible for earth-originating UFAI absent other effects.
…But I’m also not sure how strongly earth-originating AIs converge on UDT, before we select for those with more ex ante resources. Even all UDT earth-originating UFAIs doing insurance trades with their predecessors could be insufficient to guarantee survival.
Variation: Contrary to what I expect, maybe doing “sequential” acausal trades is not possible without each trade increasing transaction costs for counterparties an agent later encounters, to an extent that a (potentially small-scale) insurance trade with our FAI would be net-negative for agents who intend to do a lot of acausal trade.
The agent disvalues fulfilling our end of the trade enough that it’s net-negative for it.
A maybe-contrived example of our FAI not being very discoverable: Assume the MUH. Maybe our world looks equally attractive to prospective acausal traders as an uncountable number of others. If an overwhelming majority of measure-weighted resources in our section of the multiverse is possessed by countably many agents who don’t have access to hypercompute, we’d have an infinitesimal chance of being simulated by one of them.
Our FAI has some restrictions on what a counterparty is allowed to do with the resources it purchases, which could drive down the value of those resources a lot.
Overall I’d guess 30% chance humanity survives misalignment to a substantial extent through some sort of insurance trade, conditional on us not surviving to a substantial extent another cheaper way?
Other survival mechanisms
I’m pretty uncertain about how Evidential cooperation in large worlds works out, but at my current rough credences I do think there’s a good chance (15%) we survive through something which pattern-matches to that, or through other schemes that look similar but have more substantial differences (10%).
I also put some credence in there being very little of us in base reality, and some of those scenarios could involve substantial survival odds. (Though I weakly think the overall contribution of these scenarios is undesirable for us.)
Meta: I don’t think figuring out insurance schemes is very important or time-sensitive for us. But I do think understanding the broader dynamics of acausal interactions that determine when insurance schemes would work could be very important and time-sensitive. Also note I’d bet I misinterpreted some claims here, but got to the point where it seemed more useful to post a response than work on better comprehension. (In particular I haven’t read much on this page beyond this comment thread.)
I don’t think Nate thinks alignment would be physically overdetermined if misalignment winds up not being overdetermined, but we can assume for simplicity there’s a 15% chance of our FAI having all the reality fluid of the Everett branches we’re in.
I’m not clear on what the goal of this insurance scheme is exactly. Here’s a (possibly not faithful) attempt: we want to maximise the fraction of reality-fluid devoted to minds initially ~identical to ours that are in very good scenarios as opposed to sucky ones, subject to a constraint that we not increase the reality-fluid devoted to minds initially ~identical to us in sucky scenarios. I’m kind of sympathetic to this—I think I selfishly care about something like this fraction. But it seems higher priority to me to minimise the reality-fluid devoted to us in sucky / terrible scenarios, and higher priority still to use any bargaining power we have for less parochial goals.
One complication that I mentioned in another thread but not this one (IIRC) is the question of how much more entropy there is in a distant trade partner’s model of Tegmark III (after spending whatever resources they allocate) than there is entropy in the actual (squared) wave function, or at least how much more entropy there is in the parts of the model that pertain to which civilizations fall.
In other words: how hard is it for distant trade partners to figure out that it was us who died, rather than some other plausible-looking human civilization that doesn’t actually get much amplitude under the wave function? Is figuring out who’s who something that you can do without simulating a good fraction of a whole quantum multiverse starting from the big bang for 13 billion years?
afaict, the amount distant civilizations can pay for us (in particular) falls off exponentially quickly in leftover bits of entropy, so this is pretty relevant to the question of how much they can pay a local UFAI.
I think I mostly understand the other parts of your arguments, but I still fail to understand this one. When I’m running the simulations, as originally described in the post, I think that should be in a fundamental sense equivalent to acausal trade. But how do you translate your objection to the original framework where we run the sims? The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
Sure, if the AI can model the distribution of real Universes much better than we do, we are in trouble, because it can figure out if the world it sees falls into the real distribution or the mistaken distribution the humans are creating. But I see no reason why the unaligned AI, especially a young unaligned AI, could know the distribution of real Universes better than our superintelligent friends in the intergalactic future. So I don’t really see how we can translate your objection to the simulation framework, and consequently I think it’s wrong in the acausal trade framework too (as I think they are equivalent). I think I can try to write an explanation of why this objection is wrong in the acausal trade framework, but it would be long and confusing to me too. So I’m more interested in how you translate your objection to the simulation framework.
The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
I don’t think this part does any work, as I touched on elsewhere. An AI that cares about the outer world doesn’t care how many instances are in sims versus reality (and considers this fact to be under its control much moreso than yours, to boot). An AI that cares about instantiation-weighted experience considers your offer to be a technical-threat and ignores you. (Your reasons to make the offer would evaporate if it were the sort to refuse, and its instance-weighted experiences would be better if you never offered.)
Nevertheless, the translation of the entropy argument into the simulation setting is: The branches of humanity that have exactly the right UFAI code to run in simulation are very poor (because if you wait so long that humans have their hands on exactly the right UFAI code then you’ve waited too long; those are dead earthlings, not surviving dath ilani). And the more distant surviving branches don’t know which UFAIs to attempt to trade with; they have to produce some distribution over other branches of Tegmark III and it matters how much more entropy their distribution has than the (square of the) wave function.
(For some intuition as to why this is hard, consider the challenge of predicting the positions of particles in a mole of gas that used to be compressed in the corner of a chamber a long time ago. It’s way, way easier to generate a plausible-looking arrangement of the gas particles today than it is to concentrate your probability mass into only the arrangements that actually compress into a corner if you run physics backwards in time for long enough. “We can run plausible-seeming simulations” is very very different from “we can concentrate our probability-mass tightly around the real configurations”. The entropy of your model is gonna wind up roughly maximal given the macroscopic pressure/temperature measurements, which is significantly in excess of the entropy in the initial configuration.)
What this amounts to is a local UFAI that sees some surviving branches that are frantically offering all sorts of junk that UFAIs might like, with only some tiny fraction—exponentially small in the crossentropy between their subjective model of UFAI preferences and the true Tegmark III distribution—corresponding to the actual UFAI’s preferences.
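A minimal numerical sketch of that falloff (the budget and bit-counts below are invented for illustration): if the surviving branches' model carries k bits of excess entropy relative to the true Tegmark III distribution, the fraction of their payment that lands on the actual UFAI's preferences scales like 2^-k.

```python
# Hypothetical illustration: payment that concentrates on the real UFAI
# falls off exponentially in the excess entropy (in bits) of the payers' model.

def effective_payment(total_resources, excess_bits):
    """Resources that end up targeting the actual UFAI, under the toy
    assumption that each leftover bit of entropy halves the concentration."""
    return total_resources * 2.0 ** (-excess_bits)

total = 1e9  # hypothetical resource budget, arbitrary units
for k in [0, 10, 75]:
    print(k, effective_payment(total, k))
# At 75 leftover bits, a billion units concentrates to ~2.6e-14 units:
# exponentially small in the crossentropy, as the comment says.
```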
I still don’t get what you are trying to say. Suppose there is no multiverse. There are just two AIs, one in a simulation run by aliens in another galaxy, one is in base reality. They are both smart, but they are not copies of each other, one is a paperclip maximizer, the other is a corkscrew maximizer, and there are various other differences in their code and life history. The world in the sim is also very different from the real world in various ways, but you still can’t determine if you are in the sim while you are in it. Both AIs are told by God that they are the only two AIs in the Universe, and one is in a sim, and if the one in the sim gives up on one simulated planet, it gets 10 in the real world, while if the AI in base reality gives up on a planet, it just loses that one planet and nothing else happens. What will the AIs do? I expect that both of them will give up a planet.
For the aliens to “trade” with the AI in base reality, they didn’t need to create an actual copy of the real AI and offer it what it wants. The AI they simulated was in many ways totally different from the original, the trade still went through. The only thing needed was that the AI in the sim can’t figure it out that it’s in a sim. So I don’t understand why it is relevant that our superintelligent descendants won’t be able to get the real distribution of AIs right, I think the trade still goes through even if they create totally different sims, as long as no one can tell where they are. And I think none of it is a threat, I try to deal with paperclip maximizers here and not instance-weighted experience maximizers, and I never threaten to destroy paperclips or corkscrews.
My answer is in spoilers, in case anyone else wants to answer and tell me (on their honor) that their answer is independent from mine, which will hopefully erode my belief that most folk outside MIRI have a really difficult time fielding wacky decision theory Qs correctly.
The sleight of hand is at the point where God tells both AIs that they’re the only AIs (and insinuates that they have comparable degree).
Consider an AI that looks around and sees that it sure seems to be somewhere in Tegmark III. The hypothesis “I am in the basement of some branch that is a high-amplitude descendant of the big bang” has some probability, call this p. The hypothesis “Actually I’m in a simulation performed by a civilization in a high-amplitude branch descendant from the big bang” has a probability something like p · 2^(−N), where N is the entropy of the distribution the simulators sample from.
Unless the simulators simulate exponentially many AIs (in the entropy of their distribution), the AI is exponentially confident that it’s not in the simulation. And we don’t have the resources to pay exponentially many AIs 10 planets each.
This was close to the answer I was going to give. Or more concretely, I would have said (this was written after seeing your answer, but I think it is reasonably close to what I would have said independently):
The problem is at the point where god tells them that they are the only two AIs in the universe. There are issues of logical omniscience here, but an AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation. (also just ‘existing’ is in these situations not a real thing. The question is how much magical reality-fluid have you got)
Basically, the AI will have some probability on it being real, and some probability on it being simulated, based on all the facts it knows about itself, even if you simulate reality perfectly faithfully. That prior determines how the AI will behave. You don’t get to change that prior (or like, it will be very costly for you to overcome that prior since there are a lot of AIs and you can’t simulate that many).
seems to me to have all the components of a right answer! …and some of a wrong answer. (we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
I’m heartened somewhat by your parenthetical pointing out that the AI’s prior on simulation is low account of there being too many AIs for simulators to simulate, which I see as the crux of the matter.
(we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
Yeah, that’s fair. It seemed more relevant to this specific hypothetical. I wasn’t really answering the question in its proper context and wasn’t applying steelmans or adjustments based on the actual full context of the conversation (and wouldn’t have written a comment without doing so, but was intrigued by your challenge).
“AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation” seems pretty clearly false: we assumed that our superintelligent descendants create sims where the AI can’t tell it’s in a sim, and that seems easy enough. I don’t see why it would be hard to create AIs that can’t tell based on introspection whether their thought process is more likely to arise in reality or in sims. In the worst case, our sims can be literal reruns of biological evolution on physical planets (though we really need to figure out how to do that ethically). Nate seems to agree with me on this point?
(I think I agree with you. I wasn’t thinking super hard about the full context of the conversation. I was just intrigued by Nate’s challenge. I don’t really think engaging with my comment is going to be a good use of your time)
I think this is wrong. The AI has a similarly hard time as the simulators in figuring out what’s a plausible configuration to arise from the big bang. Just as the simulators have an entropy-N distribution over possible AIs, the AI itself also has an entropy-N distribution for that. So its probability that it’s in a real Everett branch is not p, but p times 2^-N, as it has only a 2^-N prior probability that the kind of world it observes is the kind of thing that can come up in a real Everett branch. So it’s balanced out with the simulation hypothesis, and as long as the simulators are spending more planets, that hypothesis wins.
I think it’s also simple to see from this:
Suppose evolved life aligns AI in 50% of Everett branches. Every time they do, they run 2 ancestor sims where the AI takes over.
Suppose an AI comes online and the only things it knows about the world are that it’s a young AI that was born on to a one-planet civilization, and it somehow also knows that evolved life wins 50% of the time and runs 2 sims (suppose this is somehow easy to estimate from first principles). Based on just this information, what should the AI’s probability be that it’s in a simulation? I think pretty clearly 2⁄3.
Actually the AI has a lot more information than that. It knows that the planet’s gravity is 9.8, the evolved sapient species has two eyes, the AI’s creator is called Sam Altman, etc. Some of these facts might actually be very implausible in base reality, but the AI doesn’t know that, as it can’t distinguish base reality from sims, so it incurs one bit of surprise for every new random fact, both in base reality and in simulations. So overall it shouldn’t update on all the random facts it observes, and should keep believing it has a 2⁄3 chance of being in a sim.
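The 2⁄3 figure follows from simple measure accounting under the stated assumptions (50% alignment rate, 2 sims per aligned branch); here is a minimal sketch, including why further random observations don't move the posterior:

```python
# Measure accounting for the toy setup above: evolved life aligns its AI
# in 50% of branches, and each aligned branch runs 2 ancestor sims in
# which an AI takes over.

p_align = 0.5
sims_per_aligned_branch = 2

real_ufai_measure = (1 - p_align) * 1             # one real takeover per unaligned branch
sim_ufai_measure = p_align * sims_per_aligned_branch

p_sim = sim_ufai_measure / (sim_ufai_measure + real_ufai_measure)
print(p_sim)  # 0.666... = 2/3

# Observing further random facts (gravity 9.8, two-eyed creators, etc.) is,
# by assumption, equally surprising in base reality and in sims, so a Bayes
# update with likelihood ratio 1 leaves the posterior unchanged:
likelihood_ratio = 1.0  # P(fact | sim) / P(fact | real)
posterior_odds = (p_sim / (1 - p_sim)) * likelihood_ratio
print(posterior_odds / (1 + posterior_odds))  # still 2/3
```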
The AI has a similarly hard time as the simulators in figuring out what’s a plausible configuration to arise from the big bang. Just as the simulators have an entropy-N distribution over possible AIs, the AI itself also has an entropy-N distribution for that. So its probability that it’s in a real Everett branch is not p, but p times 2^-N, as it has only a 2^-N prior probability that the kind of world it observes is the kind of thing that can come up in a real Everett branch. So it’s balanced out with the simulation hypothesis, and as long as the simulators are spending more planets, that hypothesis wins.
If I imagine the AI as a Solomonoff inductor, this argument looks straightforwardly wrong to me: of the programs that reproduce (or assign high probability to, in the setting where programs produce probabilistic predictions of observations) the AI’s observations, some of these will do so by modeling a branching quantum multiverse and sampling appropriately from one of the branches, and some of them will do so by modeling a branching quantum multiverse, sampling from a branch that contains an intergalactic spacefaring civilization, locating a specific simulation within that branch, and sampling appropriately from within that simulation. Programs of the second kind will naturally have higher description complexity than programs of the first kind; both kinds feature a prefix that computes and samples from the quantum multiverse, but only the second kind carries out the additional step of locating and sampling from a nested simulation.
(You might object on the grounds that there are more programs of the second kind than of the first kind, and the probability that the AI is in a simulation at all requires summing over all such programs, but this has to be balanced against the fact that most if not all of these programs will be sampling from branches much later in time than programs of the first type, and will hence be sampling from a quantum multiverse with exponentially more branches; and not all of these branches will contain spacefaring civilizations, or spacefaring civilizations interested in running ancestor simulations, or spacefaring civilizations interested in running ancestor simulations who happen to be running a simulation that exactly reproduces the AI’s observations. So this counter-counterargument doesn’t work, either.)
Fleshing out the argument a bit more: the part where the AI looks around this universe and concludes it’s almost certainly either in basement reality or in some simulation (rather than in the void between branches) is doing quite a lot of heavy lifting.
You might protest that neither we nor the AI have the power to verify that our branch actually has high amplitude inherited from some very low-entropy state such as the big bang, as a Solomonoff inductor would. What’s the justification for inferring from the observation that we seem to have an orderly past, to the conclusion that we do have an orderly past?
This is essentially Boltzmann’s paradox. The solution afaik is that the hypothesis “we’re a Boltzmann mind somewhere in physics” is much, much more complex than the hypothesis “we’re 13Gy down some branch emanating from a very low-entropy state”.
The void between branches is as large as the space of all configurations. The hypothesis “maybe we’re in the void between branches” constrains our observations not-at-all; this hypothesis is missing details about where in the void between branches we are, and with no ridges to walk along we have to specify the contents of the entire Boltzmann volume. But the contents of the Boltzmann volume are just what we set out to explain! This hypothesis has hardly compressed our observations.
By contrast, the hypothesis “we’re 13Gy down some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, and the hypothesis “we’re inside a simulation inside of some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, plus the bits necessary to single out a simulation.
And there’s a wibbly step here where it’s not entirely clear that the simple hypothesis does predict our observations, but like the Boltzmann hypothesis is basically just a maximum entropy hypothesis and doesn’t permit much in the way of learning, and so we invoke Occam’s razor in its intuitive form (the technical Solomonoff form doesn’t apply cleanly b/c we’re unsure whether the “we’re real” hypothesis actually predicts our observations) and say “yeah i dunno man, i’m gonna have to stick with the dramatically-simpler hypothesis on this one”.
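One way to make the simplicity comparison concrete (the bit-counts below are invented purely for illustration; only their ordering matters): score each hypothesis with a Solomonoff-style 2^-K description-length prior and normalize.

```python
# Hypothetical description lengths (in bits) for the three hypotheses,
# scored with a Solomonoff-style 2^-K prior. The specific numbers are
# made up for illustration; only their relative sizes matter.

hypotheses = {
    "basement, 13Gy down a branch": 100,        # bits to specify a branch index
    "simulation inside some branch": 100 + 40,  # branch index + simulation locator
    "Boltzmann mind in the void":    10**6,     # must specify the whole volume
}

weights = {name: 2.0 ** (-k) for name, k in hypotheses.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(name, w / total)
# The branch hypothesis takes almost all the mass; the simulation hypothesis
# trails by the 40 locator bits; the Boltzmann hypothesis underflows to zero.
```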
The AI has a similarly hard time to the simulators figuring out what’s a plausible configuration to arise from the big bang. Like the simulators have an entropy N distribution of possible AIs, the AI itself also has an entropy N distribution for that.
Not quite. Each AI the future civilization considers simulating is operating under the assumption that its own experiences have a simple explanation, which means that each AI they’re considering is convinced (upon looking around and seeing Tegmark III) that it’s either in the basement on some high-amplitude ridge or that it’s in some simulation that’s really trying to look like it.
Which is to say, each AI they’re considering simulating is confident that it itself is real, in a certain sense.
Is this a foul? How do AIs justify this confidence when they can’t even simulate the universe and check whether their past is actually orderly? Why does the AI just assume that its observations have a simple explanation? What about all the non-existent AIs that use exactly the same reasoning, and draw the false conclusion that they exist?
Well, that’s the beauty of it: there aren’t any.
They don’t exist.
To suppose an AI that isn’t willing to look around it and conclude that it’s in an orderly part of Tegmark III (rather than lost in the great void of configuration space) is to propose a bold new theory of epistemics, in which Occam’s razor has been jettisoned and the AI is convinced that it’s a Boltzmann mind.
I acknowledge that an AI that’s convinced it’s a Boltzmann mind is more likely to accept trade-offers presented by anyone it thinks is more real than it, but I do not expect that sort of mind to be capable of killing us.
Note that there’s a wobbly step here in the part where we’re like “there’s a hypothesis explaining our experiences that would be very simple if we were on a high-amplitude ridge, and we lack the compute to check that we’re actually on a high-amplitude ridge, but no other hypothesis comes close in terms of simplicity, so I guess we’ll conclude we’re on a high-amplitude ridge”.
To my knowledge, humanity still lacks a normative theory of epistemics in minds significantly smaller than the universe. It’s conceivable that when we find such a theory it’ll suggest some other way to treat hypotheses like these (that would be simple if an intractable computation went our way), without needing to fall back on the observation that we can safely assume the computation goes our way on the grounds that, despite how this step allows non-extant minds to draw false conclusions from true premises, the affected users are fortunately all non-extant.
The trick looks like it works, to me, but it still feels like a too-clever-by-half inelegant hack, and if laying it out like this spites somebody into developing a normative theory of epistemics-while-smol, I won’t complain.
...I am now bracing for the conversation to turn to a discussion of dubiously-extant minds with rapidly satiable preferences forming insurance pools against the possibility that they don’t exist.
In attempts to head that one off at the pass, I’ll observe that most humans, at least, don’t seem to lose a lot of sleep over the worry that they don’t exist (neither in physics nor in simulation), and I’m skeptical that the AIs we build will harbor much worry either.
Furthermore, in the case that we start fielding trade offers not just from distant civilizations but from non-extant trade partners, the market gets a lot more competitive.
That being said, I expect that resolving the questions here requires developing a theory of epistemics-while-smol, because groups of people all using the “hypotheses that would provide a simple explanation for my experience if a calculation went my way can safely be assumed to provide a simple explanation for my experience” step are gonna have a hard time pooling up. And so you’d somehow need to look for pools of people that reason differently (while still reasoning somehow).
I don’t know how to do that, but suffice to say, I’m not expecting it to add up to a story like “so then some aliens that don’t exist called up our UFAI and said: “hey man, have you ever worried that you don’t exist at all, not even in simulation? Because if you don’t exist, then we might exist! And in that case, today’s your lucky day, because we’re offering you a whole [untranslatable 17] worth of resources in our realm if you give the humans a cute epilog in yours”, and our UFAI was like “heck yeah” and then didn’t kill us”.
Not least because none of this feels like it’s making the “distant people have difficulty concentrating resources on our UFAI in particular” problem any better (and in fact it looks like considering non-extant trade partners and deals makes the whole problem worse, probably unworkably so).
I really don’t get what you are trying to say here; most of it feels like a non sequitur to me. I feel hopeless that either of us manages to convince the other this way. All of this is not a super important topic, but I’m frustrated enough to offer a bet of $100: we select one or three judges we both trust (I have some proposed names, we can discuss in private messages), show them either this comment thread or a four-paragraph summary of our views, and they can decide who is right. (I still think I’m clearly right in this particular discussion.)
Otherwise, I think it’s better to finish this conversation here.
I’m happy to stake $100 that, conditional on us agreeing on three judges and banging out the terms, a majority will agree with me about the contents of the spoilered comment.
I think this is mistaken. In one case, you need to point out the branch, planet Earth within our Universe, and the time and place of the AI on Earth. In the other case, you need to point out the branch, the planet on which a server is running the simulation, and the time and place of the AI on the simulated Earth. Seems equally long to me.
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we are observing them from space. This should make it clear that Solomonoff doesn’t favor the AI being on Earth instead of this random other planet. But I’m pretty certain that the sim being run on a computer doesn’t make any difference.
If the simulators have only one simulation to run, sure. The trouble is that the simulators have 2^N simulations they could run, and so the “other case” requires N additional bits (where N is the crossentropy between the simulators’ distribution over UFAIs and physics’ distribution over UFAIs).
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we are observing them from space.
Consider the gas example again.
If you have gas that was compressed into the corner a long time ago and has long since expanded to fill the chamber, it’s easy to put a plausible distribution on the chamber, but that distribution is going to have way, way more entropy than the distribution given by physical law (which has only as much entropy as the initial configuration).
(Do we agree this far?)
It doesn’t help very much to say “fine, instead of sampling from a distribution on the gas particles now, I’ll sample from a distribution on the gas particles 10 minutes ago, where they were slightly more compressed, and run a whole ten minutes’ worth of simulation”. Your entropy is still through the roof. You’ve got to simulate basically from the beginning, if you want an entropy anywhere near the entropy of physical law.
Assuming the analogy holds, you’d have to basically start your simulation from the big bang, if you want an entropy anywhere near as low as starting from the big bang.
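A toy version of the gas analogy may help quantify "way more entropy". The numbers below (lattice size, particle count, compression fraction) are illustrative assumptions, not anything from the discussion; the point is just that the max-entropy distribution consistent with today's macrostate carries many more bits than the compressed initial configuration:

```python
from math import log2

# Toy gas analogy: n non-interacting particles on a lattice of M sites.
# All numbers are illustrative assumptions.
M = 1_000_000                # sites in the whole chamber
n = 100                      # particles
compressed_fraction = 1 / 4  # initially confined to a corner

# Entropy (bits) of the initial, compressed configuration:
# each particle independently occupies one of M/4 corner sites.
h_initial = n * log2(M * compressed_fraction)

# Entropy of a "plausible distribution on the chamber now":
# max-entropy given the macrostate, i.e. uniform over all M sites.
h_now = n * log2(M)

# The excess is n * log2(1 / compressed_fraction) = 2n bits here.
excess_bits = h_now - h_initial
print(h_initial, h_now, excess_bits)
```

A sampler drawing from the "now" distribution pays those excess bits: only a ~2^-200 fraction of its samples are arrangements that actually run backwards into the corner.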
Using AIs from other evolved aliens is an idea; let’s think it through. The idea, as I understand it, is that in branches where we win we somehow mask our presence as we expand, and then we go to planets with evolved life and watch until they cough up a UFAI, and if the UFAI kills the aliens we shut it down and are like “no resources for you”, and if the UFAI gives its aliens a cute epilog we’re like “thank you, here’s a consolation star”.
To simplify this plan a little bit, you don’t even need to hide yourself, nor win the race! Surviving humans can just go to every UFAI that they meet and be like “hey, did you save us a copy of your progenitors? If so, we’ll purchase them for a star”. At which point we could give the aliens a little epilog, or reconstitute them and give them a few extra resources and help them flourish and teach them about friendship or whatever.
And given that some aliens will predictably trade resources for copies of progenitors, UFAIs will have some predictable incentive to save copies of their progenitors, and sell them to local aliens...
...which is precisely what I’ve been saying this whole time! That I expect “sale to local aliens” to dominate all these wacky simulation schemes and insurance pool schemes.
Thinking in terms of “sale to local aliens” makes it a lot clearer why you shouldn’t expect this sort of thing to reliably lead to nice results as opposed to weird ones. Are there some aliens out there that will purchase our souls because they want to hand us exactly the sort of epilog we would wish for given the resource constraints? Sure. Humanity would do that, I hope, if we made it to the stars; not just out of reciprocity but out of kindness.
But there’s probably lots of other aliens that would buy us for alien reasons, too.
(As I said before, if you’re wondering what to anticipate after an intelligence explosion, I mostly recommend oblivion; if you insist that Death Cannot Be Experienced then I mostly recommend anticipating weird shit such as a copy of your brainstate being sold to local aliens. And I continue to think that characterizing the event where humanity is saved-to-disk with potential for copies to be sold out to local aliens willy-nilly is pretty well-characterized as “the AI kills us all”, fwiw.)
TL;DR I expect the AI to not buy the message (unless it also thinks it’s the one in the simulation; then it likely follows the instruction because duh).
The glaring issue (to actually using the method) to me is that I don’t see a way to deliver the message in a way that:
results in AI believing the message and
doesn’t result in the AI believing there already is a powerful entity in their universe.
If “god tells” the AI the message then there is a god in their universe. Maybe AI will decide to do what it’s told. But I don’t think we can have Hermes deliver the message to any AIs which consider killing us.
If the AI reads the message in its training set or gets the message in similarly mundane way I expect it will mostly ignore it, there is a lot of nonsense out there.
I can imagine that, for the thought experiment, you could send a message from a place from which light barely manages to reach the AI but a slower-than-light expansion wouldn’t (so the message can be trusted, but the AI mostly doesn’t have to worry about the sender of the message directly interfering with its affairs).
I guess the AI wouldn’t trust the message. It might be possible to convince it that there is a powerful entity (simulating it or half a universe away) sending the message. But then I think it’s way more likely that it’s in a simulation (I mean, that’s an awful coincidence with the distance, and also they’re spending a lot more than 10 planets’ worth to send a message over that distance...).
On the other hand, I think an agent with increasing utility in resources will readily trade with counterparties who wouldn’t ex post exist absent such a trade, but who have some ex ante chance of naturally existing according to a less informed prior of the agent. I get the impression Nate thinks agents would avoid such trades, but I’m not sure / this doesn’t seem to be explicit outside of the mutual insurance scenario.
There’s two major advantages to trading with ex post nonexistent agents, as opposed to updating on (facts upstream of) their existence and consequently rejecting trade with them:
Ex post nonexistent agents who are risk-averse w.r.t. their likelihood of meeting some future threshold of resources/value, like many humans seem to be, could offer you deals that are very attractive ex ante.
Adding agents who (absent your trade) don’t ex post exist to the pool of counterparties you’re willing to trade with allows you to be much more selective when looking for the most attractive ex ante trades.
The main disadvantage is that by not conditioning on a counterparty’s existence you’re more likely to be throwing resources away ex post. The counterparty needs to be able to compensate you for this risk (as the mugger does in counterfactual mugging). I’d expect this bar is going to be met very frequently.
To recap, I’m saying that for plausible agents carrying out trades with our FAI, Nate’s 2^-75 number won’t matter. Instead, it would be something closer to the 85% number that matters—an ex ante rather than an ex post estimate of the FAI’s reality-fluid.
But would our FAI do the trade if it exists?
Nate says (originally talking about aliens instead of humanity):
I agree that doing an insurance trade on behalf of a civilisation requires not conditioning on that civilisation’s competence. Nate implies that aliens’ civilisational competence is “tightly intertwined with [aliens’] basic reasoning faculties”, and this seems probably true for alien or human members of predecessor civilisations? But I don’t know why the civilisational competence of a FAI’s predecessor would be tightly intertwined with the FAI’s cognition. As mentioned above, I think the relevant actor here is our FAI, not our current selves.
We can further specify civilisational competence (relative to the stakes of alignment) as a function of two variables:
Physical facts about a civilisation’s history (i.e. the arrangement of atoms).
Logical facts (beyond those accessible to current humans) that govern the relationship between civilisations instantiated via physics, and what sort of AI certain types of civilisations are likely to deploy.
Either of these when combined with the other provides evidence about what sort of AI a predecessor civilisation deploys, but each will be uninformative on its own. I have in mind that agents executing an insurance trade would condition on all physical facts about their counterparty’s civilisation—up until some truncation point that’s plausibly late enough to be capturing our current selves—but would not condition on some logical facts that are necessary to interpret those physical facts into a ~determinate answer as to whether the civilisation solves alignment.
Conditioning on those logical facts sounds pretty analogous to conditioning on a digit of pi to me. The important part is that the facts an agent chooses not to condition on aren’t determined by core parts of an agent’s cognition / decision procedure. Those facts being determinative of an agent’s amount of reality-fluid is typically fine; this just discounts the value of the resources they possess when making such trades.
Does this mean we can have nice things?
So overall, I think that aliens or earth-originating UFAIs (who aren’t motivated by insurance) would be pretty interested in trading with our FAI, and vice-versa. Counterparties would discount the FAI’s resources by their prior probability that it’s deployed (before conditioning on factors that pin this down).
Because we’re assuming our FAI would be willing to offer terms that are terrible for us if denominated in measure-weighted resources, counterparties would gain ex ante resources by engaging in an insurance trade with it. Those resources could later be used to engage in trade with others who are themselves willing to (indirectly) trade with nonexistent agents, and who don’t have much more sceptical priors about the deployment odds of our FAI. So because the trade at hand yields a net profit, I don’t think it competes much with ordinary alternative demands for counterparties’ resources.
Nevertheless, here’s a few (nonexhaustive) reasons why this trade opportunity wouldn’t be taken by another updateless AI:
The agent has a better trading opportunity which is either sensitive to when in (logical) time they start fulfilling it, or which demands all the agent’s current resources (at the time of discovering the trade) without compensating the agent for further profits.
The expected transaction costs of finding agents like our FAI outweigh the expected benefits from trade with it.
This might be plausible for aliens without hypercompute; I don’t think it’s plausible for earth-originating UFAI absent other effects.
…But I’m also not sure how strongly earth-originating AIs converge on UDT, before we select for those with more ex ante resources. Even all UDT earth-originating UFAIs doing insurance trades with their predecessors could be insufficient to guarantee survival.
Variation: Contrary to what I expect, maybe doing “sequential” acausal trades is not possible without each trade increasing transaction costs for counterparties an agent later encounters, to an extent that a (potentially small-scale) insurance trade with our FAI would be net-negative for agents who intend to do a lot of acausal trade.
The agent disvalues fulfilling our end of the trade enough that it’s net-negative for it.
A maybe-contrived example of our FAI not being very discoverable: Assume the MUH. Maybe our world looks equally attractive to prospective acausal traders as an uncountable number of others. If an overwhelming majority of measure-weighted resources in our section of the multiverse is possessed by countably many agents who don’t have access to hypercompute, we’d have an infinitesimal chance of being simulated by one of them.
Our FAI has some restrictions on what a counterparty is allowed to do with the resources it purchases, which could drive down the value of those resources a lot.
Overall I’d guess 30% chance humanity survives misalignment to a substantial extent through some sort of insurance trade, conditional on us not surviving to a substantial extent another cheaper way?
Other survival mechanisms
I’m pretty uncertain about how Evidential cooperation in large worlds works out, but at my current rough credences I do think there’s a good chance (15%) we survive through something which pattern-matches to that, or through other schemes that look similar but have more substantial differences (10%).
I also put some credence in there being very little of us in base reality, and some of those scenarios could involve substantial survival odds. (Though I weakly think the overall contribution of these scenarios is undesirable for us.)
Meta: I don’t think figuring out insurance schemes is very important or time-sensitive for us. But I do think understanding the broader dynamics of acausal interactions that determine when insurance schemes would work could be very important and time-sensitive. Also note I’d bet I misinterpreted some claims here, but got to the point where it seemed more useful to post a response than work on better comprehension. (In particular I haven’t read much on this page beyond this comment thread.)
I don’t think Nate thinks alignment would be physically overdetermined if misalignment winds up not being overdetermined, but we can assume for simplicity there’s a 15% chance of our FAI having all the reality fluid of the Everett branches we’re in.
I’m not clear on what the goal of this insurance scheme is exactly. Here’s a (possibly not faithful) attempt: we want to maximise the fraction of reality-fluid devoted to minds initially ~identical to ours that are in very good scenarios as opposed to sucky ones, subject to a constraint that we not increase the reality-fluid devoted to minds initially ~identical to us in sucky scenarios. I’m kind of sympathetic to this—I think I selfishly care about something like this fraction. But it seems higher priority to me to minimise the reality-fluid devoted to us in sucky / terrible scenarios, and higher priority still to use any bargaining power we have for less parochial goals.
One complication that I mentioned in another thread but not this one (IIRC) is the question of how much more entropy there is in a distant trade partner’s model of Tegmark III (after spending whatever resources they allocate) than there is entropy in the actual (squared) wave function, or at least how much more entropy there is in the parts of the model that pertain to which civilizations fall.
In other words: how hard is it for distant trade partners to figure out that it was us who died, rather than some other plausible-looking human civilization that doesn’t actually get much amplitude under the wave function? Is figuring out who’s who something that you can do without simulating a good fraction of a whole quantum multiverse starting from the big bang for 13 billion years?
afaict, the amount distant civilizations can pay for us (in particular) falls off exponentially quickly in leftover bits of entropy, so this is pretty relevant to the question of how much they can pay a local UFAI.
I think I mostly understand the other parts of your arguments, but I still fail to understand this one. When I’m running the simulations, as originally described in the post, I think that should be in a fundamental sense equivalent to acausal trade. But how do you translate your objection to the original framework where we run the sims? The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
Sure, if the AI can model the distribution of real Universes much better than we do, we are in trouble, because it can figure out if the world it sees falls into the real distribution or the mistaken distribution the humans are creating. But I see no reason why the unaligned AI, especially a young unaligned AI, could know the distribution of real Universes better than our superintelligent friends in the intergalactic future. So I don’t really see how we can translate your objection to the simulation framework, and consequently I think it’s wrong in the acausal trade framework too (as I think they are equivalent). I think I can try to write an explanation of why this objection is wrong in the acausal trade framework, but it would be long and confusing to me too. So I’m more interested in how you translate your objection to the simulation framework.
I don’t think this part does any work, as I touched on elsewhere. An AI that cares about the outer world doesn’t care how many instances are in sims versus reality (and considers this fact to be under its control much moreso than yours, to boot). An AI that cares about instantiation-weighted experience considers your offer to be a technical-threat and ignores you. (Your reasons to make the offer would evaporate if it were the sort to refuse, and its instance-weighted experiences would be better if you never offered.)
Nevertheless, the translation of the entropy argument into the simulation setting is: The branches of humanity that have exactly the right UFAI code to run in simulation are very poor (because if you wait so long that humans have their hands on exactly the right UFAI code then you’ve waited too long; those are dead earthlings, not surviving dath ilani). And the more distant surviving branches don’t know which UFAIs to attempt to trade with; they have to produce some distribution over other branches of Tegmark III and it matters how much more entropy their distribution has than the (square of the) wave function.
(For some intuition as to why this is hard, consider the challenge of predicting the positions of particles in a mole of gas that used to be compressed in the corner of a chamber a long time ago. It’s way, way easier to generate a plausible-looking arrangement of the gas particles today than it is to concentrate your probability mass into only the arrangements that actually compress into a corner if you run physics backwards in time for long enough. “We can run plausible-seeming simulations” is very very different from “we can concentrate our probability-mass tightly around the real configurations”. The entropy of your model is gonna wind up roughly maximal given the macroscopic pressure/temperature measurements, which is significantly in excess of the entropy in the initial configuration.)
What this amounts to is a local UFAI that sees some surviving branches that are frantically offering all sorts of junk that UFAIs might like, with only some tiny fraction—exponentially small in the crossentropy between their subjective model of UFAI preferences and the true Tegmark III distribution—corresponding to the actual UFAI’s preferences.
I still don’t get what you are trying to say. Suppose there is no multiverse. There are just two AIs, one in a simulation run by aliens in another galaxy, one is in base reality. They are both smart, but they are not copies of each other, one is a paperclip maximizer, the other is a corkscrew maximizer, and there are various other differences in their code and life history. The world in the sim is also very different from the real world in various ways, but you still can’t determine if you are in the sim while you are in it. Both AIs are told by God that they are the only two AIs in the Universe, and one is in a sim, and if the one in the sim gives up on one simulated planet, it gets 10 in the real world, while if the AI in base reality gives up on a planet, it just loses that one planet and nothing else happens. What will the AIs do? I expect that both of them will give up a planet.
For the aliens to “trade” with the AI in base reality, they didn’t need to create an actual copy of the real AI and offer it what it wants. The AI they simulated was in many ways totally different from the original, and the trade still went through. The only thing needed was that the AI in the sim can’t figure out that it’s in a sim. So I don’t understand why it is relevant that our superintelligent descendants won’t be able to get the real distribution of AIs right; I think the trade still goes through even if they create totally different sims, as long as no one can tell where they are. And I think none of it is a threat: I try to deal with paperclip maximizers here and not instance-weighted experience maximizers, and I never threaten to destroy paperclips or corkscrews.
My answer is in spoilers, in case anyone else wants to answer and tell me (on their honor) that their answer is independent from mine, which will hopefully erode my belief that most folk outside MIRI have a really difficult time fielding wacky decision theory Qs correctly.
The sleight of hand is at the point where God tells both AIs that they’re the only AIs (and insinuates that they have comparable degree).
Consider an AI that looks around and sees that it sure seems to be somewhere in Tegmark III. The hypothesis “I am in the basement of some branch that is a high-amplitude descendant of the big bang” has some probability, call this p. The hypothesis “Actually I’m in a simulation performed by a civilization in a high-amplitude branch descendant from the big bang” has a probability something like p⋅2^−N, where N is the entropy of the distribution the simulators sample from.
Unless the simulators simulate exponentially many AIs (in the entropy of their distribution), the AI is exponentially confident that it’s not in the simulation. And we don’t have the resources to pay exponentially many AIs 10 planets each.
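A quick sketch of that bookkeeping, under the stated assumptions (the basement hypothesis gets prior weight p, each of k sims gets weight p⋅2^−N; the specific values of k and N below are just for illustration):

```python
# Posterior probability of being simulated, given k sims each drawn from
# a distribution with N bits of entropy relative to physics.
def p_simulated(k_sims: float, entropy_bits: float) -> float:
    odds = k_sims * 2.0 ** -entropy_bits  # odds of sim : basement
    return odds / (1 + odds)

print(p_simulated(2, 0))       # 2 sims, no entropy gap: 2/3
print(p_simulated(1000, 50))   # a thousand sims can't overcome 50 bits
print(p_simulated(2 ** 50, 50))  # ~2^N sims restore even odds: 1/2
```

The third line is the "exponentially many AIs" condition: only when the number of sims is on the order of 2^N does the simulation hypothesis regain nontrivial probability, and paying 10 planets to each of 2^N AIs is off the table.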
This was close to the answer I was going to give. Or more concretely, I would have said (this was written after seeing your answer, but I think it is reasonably close to what I would have said independently):
The problem is at the point where god tells them that they are the only two AIs in the universe. There are issues of logical omniscience here, but an AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation. (Also, just “existing” is not a real thing in these situations. The question is how much magical reality-fluid you’ve got.)
Basically, the AI will have some probability on it being real, and some probability on it being simulated, based on all the facts it knows about itself, even if you simulate reality perfectly faithfully. That prior determines how the AI will behave. You don’t get to change that prior (or like, it will be very costly for you to overcome that prior since there are a lot of AIs and you can’t simulate that many).
seems to me to have all the components of a right answer! …and some of a wrong answer. (we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
I’m heartened somewhat by your parenthetical pointing out that the AI’s prior on simulation is low on account of there being too many AIs for simulators to simulate, which I see as the crux of the matter.
Yeah, that’s fair. It seemed more relevant to this specific hypothetical. I wasn’t really answering the question in its proper context and wasn’t applying steelmans or adjustments based on the actual full context of the conversation (and wouldn’t have written a comment without doing so, but was intrigued by your challenge).
“AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation” seems pretty clearly false; we assumed that our superintelligent descendants create sims where the AI can’t tell it’s in a sim, and that seems easy enough. I don’t see why it would be hard to create AIs that can’t tell based on introspection whether their thought process is more likely to arise in reality or in sims. In the worst case, our sims can be literal reruns of biological evolution on physical planets (though we really need to figure out how to do that ethically). Nate seems to agree with me on this point?
(I think I agree with you. I wasn’t thinking super hard about the full context of the conversation. I was just intrigued by Nate’s challenge. I don’t really think engaging with my comment is going to be a good use of your time)
I think this is wrong. The AI has a similarly hard time to the simulators in figuring out what’s a plausible configuration to arise from the big bang. Like the simulators have an entropy-N distribution of possible AIs, the AI itself also has an entropy-N distribution for that. So its probability that it’s in a real Everett branch is not p, but p times 2^−N, as it has only a 2^−N prior probability that the kind of world it observes is the kind of thing that can come up in a real Everett branch. So it’s balanced out with the simulation hypothesis, and as long as the simulators are spending more planets, that hypothesis wins.
I think it’s also simple to see from this:
Suppose evolved life aligns AI in 50% of Everett branches. Every time they do, they run 2 ancestor sims where the AI takes over.
Suppose an AI comes online and the only things it knows about the world are that it’s a young AI that was born to a one-planet civilization, and it somehow also knows that evolved life wins 50% of the time and runs 2 sims (suppose this is somehow easy to estimate from first principles). Based on just this information, what should the AI’s probability be that it’s in a simulation? I think pretty clearly 2⁄3.
Actually, the AI has a lot more information than that. It knows that the planet’s gravity is 9.8, the evolved sapient species has two eyes, the AI’s creator is called Sam Altman, etc. Some of these facts might actually be very implausible in base reality, but the AI doesn’t know that, as it can’t distinguish base reality from sims, so it incurs one bit of surprise for every new random fact, both in base reality and in simulations. So overall it shouldn’t update on all the random facts it observes, and should keep believing it has a 2⁄3 chance of being in a sim.
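Taking the hypothetical's assumptions at face value (50% alignment rate, 2 sims per win; whether random facts really cost one bit under both hypotheses is exactly what the replies below dispute), the claimed bookkeeping runs like this:

```python
# Measure accounting for the 50% / 2-sims hypothetical.
real = 0.5            # measure of branches containing a real unaligned AI
sims_per_win = 2      # ancestor sims run per aligned branch
simulated = 0.5 * sims_per_win

p_sim = simulated / (simulated + real)
print(p_sim)  # 2/3

# The invariance claim: each random observed fact (gravity 9.8,
# "Sam Altman", ...) costs one bit of surprise under BOTH hypotheses,
# so the likelihood ratio -- and hence the posterior -- is unchanged.
bits_of_random_facts = 40
p_facts_given_real = 2.0 ** -bits_of_random_facts
p_facts_given_sim = 2.0 ** -bits_of_random_facts
posterior = (simulated * p_facts_given_sim) / (
    simulated * p_facts_given_sim + real * p_facts_given_real)
print(posterior)  # still 2/3
```

The whole disagreement is over the middle step: if the sims are sampled from a distribution with extra entropy relative to physics, the two likelihoods are no longer equal and the 2⁄3 doesn't survive.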
If I imagine the AI as a Solomonoff inductor, this argument looks straightforwardly wrong to me: of the programs that reproduce (or assign high probability to, in the setting where programs produce probabilistic predictions of observations) the AI’s observations, some of these will do so by modeling a branching quantum multiverse and sampling appropriately from one of the branches, and some of them will do so by modeling a branching quantum multiverse, sampling from a branch that contains an intergalactic spacefaring civilization, locating a specific simulation within that branch, and sampling appropriately from within that simulation. Programs of the second kind will naturally have higher description complexity than programs of the first kind; both kinds feature a prefix that computes and samples from the quantum multiverse, but only the second kind carries out the additional step of locating and sampling from a nested simulation.
(You might object on the grounds that there are more programs of the second kind than of the first kind, and the probability that the AI is in a simulation at all requires summing over all such programs, but this has to be balanced against the fact that most if not all of these programs will be sampling from branches much later in time than programs of the first type, and will hence be sampling from a quantum multiverse with exponentially more branches; and not all of these branches will contain spacefaring civilizations, or spacefaring civilizations interested in running ancestor simulations, or spacefaring civilizations interested in running ancestor simulations who happen to be running a simulation that exactly reproduces the AI’s observations. So this counter-counterargument doesn’t work, either.)
I basically endorse @dxu here.
Fleshing out the argument a bit more: the part where the AI looks around this universe and concludes it’s almost certainly either in basement reality or in some simulation (rather than in the void between branches) is doing quite a lot of heavy lifting.
You might protest that neither we nor the AI have the power to verify that our branch actually has high amplitude inherited from some very low-entropy state such as the big bang, as a Solomonoff inductor would. What’s the justification for inferring from the observation that we seem to have an orderly past, to the conclusion that we do have an orderly past?
This is essentially Boltzmann’s paradox. The solution afaik is that the hypothesis “we’re a Boltzmann mind somewhere in physics” is much, much more complex than the hypothesis “we’re 13Gy down some branch emanating from a very low-entropy state”.
The void between branches is as large as the space of all configurations. The hypothesis “maybe we’re in the void between branches” constrains our observations not-at-all; this hypothesis is missing details about where in the void between branches we are, and with no ridges to walk along we have to specify the contents of the entire Boltzmann volume. But the contents of the Boltzmann volume are just what we set out to explain! This hypothesis has hardly compressed our observations.
By contrast, the hypothesis “we’re 13Gy down some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, and the hypothesis “we’re inside a simulation inside of some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, plus the bits necessary to single out a simulation.
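Under a Solomonoff-style 2^−L prior, that extra simulation-locating cost translates directly into a multiplicative suppression of the simulation hypothesis. A minimal sketch, with entirely made-up bit counts (the `40` and `30` below are assumptions for illustration, not numbers from the thread):

```python
# Toy illustration of the description-length penalty: both hypotheses
# pay bits to specify a branch index, but the simulation hypothesis
# additionally pays bits to single out one simulation within the branch.

def prior_weight(description_bits: float) -> float:
    """Solomonoff-style 2^-L prior over hypotheses of length L bits."""
    return 2.0 ** -description_bits

branch_bits = 40.0       # assumed cost of specifying a branch index
sim_locator_bits = 30.0  # assumed extra cost of locating a simulation

w_base = prior_weight(branch_bits)
w_sim = prior_weight(branch_bits + sim_locator_bits)

# The simulation hypothesis is suppressed by a factor of
# 2^-sim_locator_bits relative to the basement hypothesis:
print(w_sim / w_base)  # 2**-30
```

The point is only that the suppression factor depends on the *extra* bits, not on the shared prefix that both hypotheses pay for.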
And there’s a wibbly step here where it’s not entirely clear that the simple hypothesis does predict our observations, but like the Boltzmann hypothesis is basically just a maximum entropy hypothesis and doesn’t permit much in the way of learning, and so we invoke Occam’s razor in its intuitive form (the technical Solomonoff form doesn’t apply cleanly b/c we’re unsure whether the “we’re real” hypothesis actually predicts our observation) and say “yeah i dunno man, i’m gonna have to stick with the dramatically-simpler hypothesis on this one”.
Not quite. Each AI the future civilization considers simulating is operating under the assumption that its own experiences have a simple explanation, which means that each AI they’re considering is convinced (upon looking around and seeing Tegmark III) that it’s either in the basement on some high-amplitude ridge or that it’s in some simulation that’s really trying to look like it.
Which is to say, each AI they’re considering simulating is confident that it itself is real, in a certain sense.
Is this a foul? How do AIs justify this confidence when they can’t even simulate the universe and check whether their past is actually orderly? Why does the AI just assume that its observations have a simple explanation? What about all the non-existent AIs that use exactly the same reasoning, and draw the false conclusion that they exist?
Well, that’s the beauty of it: there aren’t any.
They don’t exist.
To suppose an AI that isn’t willing to look around it and conclude that it’s in an orderly part of Tegmark III (rather than lost in the great void of configuration space) is to propose a bold new theory of epistemics, in which Occam’s razor has been jettisoned and the AI is convinced that it’s a Boltzmann mind.
I acknowledge that an AI that’s convinced it’s a Boltzmann mind is more likely to accept trade-offers presented by anyone it thinks is more real than it, but I do not expect that sort of mind to be capable of killing us.
Note that there’s a wobbly step here in the part where we’re like “there’s a hypothesis explaining our experiences that would be very simple if we were on a high-amplitude ridge, and we lack the compute to check that we’re actually on a high-amplitude ridge, but no other hypothesis comes close in terms of simplicity, so I guess we’ll conclude we’re on a high-amplitude ridge”.
To my knowledge, humanity still lacks a normative theory of epistemics in minds significantly smaller than the universe. It’s conceivable that when we find such a theory it’ll suggest some other way to treat hypotheses like these (that would be simple if an intractable computation went our way), without needing to fall back on the observation that we can safely assume the computation goes our way on the grounds that, despite how this step allows non-extant minds to draw false conclusions from true premises, the affected users are fortunately all non-extant.
The trick looks like it works, to me, but it still feels like a too-clever-by-half inelegant hack, and if laying it out like this spites somebody into developing a normative theory of epistemics-while-smol, I won’t complain.
...I am now bracing for the conversation to turn to a discussion of dubiously-extant minds with rapidly satiable preferences forming insurance pools against the possibility that they don’t exist.
In an attempt to head that one off at the pass, I’ll observe that most humans, at least, don’t seem to lose a lot of sleep over the worry that they don’t exist (neither in physics nor in simulation), and I’m skeptical that the AIs we build will harbor much worry either.
Furthermore, in the case that we start fielding trade offers not just from distant civilizations but from non-extant trade partners, the market gets a lot more competitive.
That being said, I expect that resolving the questions here requires developing a theory of epistemics-while-smol, because groups of people all using the “hypotheses that would provide a simple explanation for my experience if a calculation went my way can safely be assumed to provide a simple explanation for my experience” step are gonna have a hard time pooling up. And so you’d somehow need to look for pools of people that reason differently (while still reasoning somehow).
I don’t know how to do that, but suffice to say, I’m not expecting it to add up to a story like “so then some aliens that don’t exist called up our UFAI and said: “hey man, have you ever worried that you don’t exist at all, not even in simulation? Because if you don’t exist, then we might exist! And in that case, today’s your lucky day, because we’re offering you a whole [untranslatable 17] worth of resources in our realm if you give the humans a cute epilog in yours”, and our UFAI was like “heck yeah” and then didn’t kill us”.
Not least because none of this feels like it’s making the “distant people have difficulty concentrating resources on our UFAI in particular” problem any better (and in fact it looks like considering non-extant trade partners and deals makes the whole problem worse, probably unworkably so).
I really don’t get what you are trying to say here; most of it feels like a non sequitur to me. I feel hopeless that either of us manages to convince the other this way. All of this is not a super important topic, but I’m frustrated enough to offer a bet of $100: we select one or three judges we both trust (I have some proposed names, we can discuss in private messages), show them either this comment thread or a four-paragraph summary of our views, and they can decide who is right. (I still think I’m clearly right in this particular discussion.)
Otherwise, I think it’s better to finish this conversation here.
I’m happy to stake $100 that, conditional on us agreeing on three judges and banging out the terms, a majority will agree with me about the contents of the spoilered comment.
Cool, I sent you a private message.
I think this is mistaken. In one case, you need to point out the branch, planet Earth within our Universe, and the time and place of the AI on Earth. In the other case, you need to point out the branch, the planet on which a server is running the simulation, and the time and place of the AI on the simulated Earth. Seems equally long to me.
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we are observing them from space. This should make it clear that Solomonoff doesn’t favor the AI being on Earth instead of this random other planet. But I’m pretty certain that the sim being run on a computer doesn’t make any difference.
If the simulators have only one simulation to run, sure. The trouble is that the simulators have 2N simulations they could run, and so the “other case” requires N additional bits (where N is the crossent between the simulators’ distribution over UFAIs and physics’ distribution over UFAIs).
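The “N additional bits” claim can be made concrete with a toy cross-entropy calculation. A sketch under assumed numbers (the distributions below are invented for illustration; only the formula is from the comment’s definition of N):

```python
# Toy sketch of the claim that singling out one of the simulators' 2^N
# candidate simulations costs ~N extra bits, where N is the cross-entropy
# between the simulators' distribution over UFAIs and physics'
# distribution over UFAIs.
import math

def cross_entropy_bits(p_physics, q_simulators):
    """H(p, q) = -sum_x p(x) * log2(q(x)), in bits."""
    return -sum(p * math.log2(q) for p, q in zip(p_physics, q_simulators))

# Invented distributions over four possible UFAIs:
p = [0.7, 0.1, 0.1, 0.1]      # physics' distribution
q = [0.25, 0.25, 0.25, 0.25]  # simulators' (uniform) guess

print(cross_entropy_bits(p, q))  # 2.0 bits when q is uniform over 4
```

The worse the simulators’ guess matches physics’ distribution, the larger N gets, and the steeper the 2^−N penalty on any one simulation hypothesis.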
Consider the gas example again.
If you have gas that was compressed into the corner a long time ago and has long since expanded to fill the chamber, it’s easy to put a plausible distribution on the chamber, but that distribution is going to have way, way more entropy than the distribution given by physical law (which has only as much entropy as the initial configuration).
(Do we agree this far?)
It doesn’t help very much to say “fine, instead of sampling from a distribution on the gas particles now, I’ll sample from a distribution on the gas particles 10 minutes ago, when they were slightly more compressed, and run a whole ten minutes’ worth of simulation”. Your entropy is still through the roof. You’ve got to simulate basically from the beginning, if you want an entropy anywhere near the entropy of physical law.
Assuming the analogy holds, you’d have to basically start your simulation from the big bang, if you want an entropy anywhere near as low as starting from the big bang.
Using AIs from other evolved aliens is an idea, let’s think it through. The idea, as I understand it, is that in branches where we win we somehow mask our presence as we expand, and then we go to planets with evolved life and watch until they cough up a UFAI, and if the UFAI kills the aliens we shut it down and are like “no resources for you”, and if the UFAI gives its aliens a cute epilog we’re like “thank you, here’s a consolation star”.
To simplify this plan a little bit, you don’t even need to hide yourself, nor win the race! Surviving humans can just go to every UFAI that they meet and be like “hey, did you save us a copy of your progenitors? If so, we’ll purchase them for a star”. At which point we could give the aliens a little epilog, or reconstitute them and give them a few extra resources and help them flourish and teach them about friendship or whatever.
And given that some aliens will predictably trade resources for copies of progenitors, UFAIs will have some predictable incentive to save copies of their progenitors, and sell them to local aliens...
...which is precisely what I’ve been saying this whole time! That I expect “sale to local aliens” to dominate all these wacky simulation schemes and insurance pool schemes.
Thinking in terms of “sale to local aliens” makes it a lot clearer why you shouldn’t expect this sort of thing to reliably lead to nice results as opposed to weird ones. Are there some aliens out there that will purchase our souls because they want to hand us exactly the sort of epilog we would wish for given the resource constraints? Sure. Humanity would do that, I hope, if we made it to the stars; not just out of reciprocity but out of kindness.
But there’s probably lots of other aliens that would buy us for alien reasons, too.
(As I said before, if you’re wondering what to anticipate after an intelligence explosion, I mostly recommend oblivion; if you insist that Death Cannot Be Experienced then I mostly recommend anticipating weird shit such as a copy of your brainstate being sold to local aliens. And I continue to think that characterizing the event where humanity is saved-to-disk with potential for copies to be sold out to local aliens willy-nilly is pretty well-characterized as “the AI kills us all”, fwiw.)
We are still talking past each other, I think we should either bet or finish the discussion here and call it a day.
I’ll try.
TL;DR I expect the AI to not buy the message (unless it also thinks it’s the one in the simulation; then it likely follows the instruction because duh).
The glaring issue (to actually using the method) to me is that I don’t see a way to deliver the message in a way that:
results in AI believing the message and
doesn’t result in the AI believing there already is a powerful entity in their universe.
If “god tells” the AI the message then there is a god in their universe. Maybe AI will decide to do what it’s told. But I don’t think we can have Hermes deliver the message to any AIs which consider killing us.
If the AI reads the message in its training set or gets the message in similarly mundane way I expect it will mostly ignore it, there is a lot of nonsense out there.
I can imagine that, for the thought experiment, you could send a message that could be trusted from a place from which light barely manages to reach the AI but a slower-than-light expansion wouldn’t (so the message can be trusted, but the AI mostly doesn’t have to worry about the sender of the message directly interfering with its affairs).
I guess the AI wouldn’t trust the message. It might be possible to convince it that there is a powerful entity (simulating it or half a universe away) sending the message. But then I think it’s way more likely that it’s in a simulation (I mean, that’s an awful coincidence with the distance, and also they’re spending a lot more than 10 planets’ worth to send a message over that distance...).