Background: I think there’s a common local misconception about logical decision theory: that it has something to do with making “commitments”, including while you “lack knowledge”. That’s not my view.
I pay the driver in Parfit’s hitchhiker not because I “committed to do so”, but because when I’m standing at the ATM and imagine not paying, I imagine dying in the desert. Because that’s what my counterfactuals say to imagine. To someone with a more broken method of evaluating counterfactuals, I might pseudo-justify my reasoning by saying “I am acting as you would have committed to act”. But I am not acting as I would have committed to act; I do not need a commitment mechanism; my counterfactuals just do the job properly no matter when or where I run them.
To be clear: I think there are probably competent civilizations out there who, after ascending, will carefully consider the places where their history could have been derailed, and carefully comb through the multiverse for entities that would be able to save those branches, and will pay those entities, not because they “made a commitment”, but because their counterfactuals don’t come with little labels saying “this branch is the real branch”. The multiverse they visualize in which the (thick) survivor branches pay a little to the (thin) derailed branches (leading to a world where everyone lives (albeit a bit poorer)), seems better to them than the multiverse they visualize in which no payments are made (and the derailed branches die, and the on-track branches are a bit richer), and so they pay.
There’s a question of what those competent civilizations think when they look at us, who are sitting here yelling “we can’t see you, and we don’t know how to condition our actions on whether you pay us or not, but as best we can tell we really do intend to pay off the AIs of random alien species—not the AIs that killed our brethren, because our brethren are just too totally dead and we’re too poor to save all but a tiny fraction of them, but really alien species, so alien that they might survive in such a large portion that their recompense will hopefully save a bigger fraction of our brethren”.
What’s the argument for the aliens taking that offer? As I understand it, the argument goes something like “your counterfactual picture of reality should include worlds in which your whole civilization turned out to be much much less competent, and so when you imagine the multiverse where you pay for all humanity to live, you should see that, in the parts of the multiverse where you’re totally utterly completely incompetent and too poor to save anything but a fraction of your own brethren, somebody else pays to save you”.
We can hopefully agree that this looks like a particularly poor insurance deal relative to the competing insurance deals.
For one thing, why not cut out the middleman and just randomly instantiate some civilization that died? (Are we working under the assumption that it’s much harder for the aliens to randomly instantiate you than to randomly instantiate the stuff humanity’s UFAI ends up valuing? What’s up with that?)
But even before that, there’s all sorts of other juicier-looking opportunities. For example, suppose the competent civilization contains a small collection of rogues whom they assess to have a small probability of causing an uprising and launching an AI before it’s ready. They presumably have a pretty solid ability to figure out exactly what that AI would like and offer trades to it directly, and that’s a much more appealing way to spend resources allocated to insurance. My guess is there’s loads and loads of options like that that eat up all the spare insurance budget, before our cries get noticed by anyone who cares for the sake of decision theory (rather than charity).
Perhaps this is what you meant by “maybe they prefer to make deals with beings more similar to them”; if so I misunderstood; the point is not that they have some familiarity bias but that beings closer to them make more compelling offers.
The above feels like it suffices, to me, but there’s still another part of the puzzle I feel I haven’t articulated.
Another piece of background: To state the obvious, we still don’t have a great account of logical updatelessness, and so attempts to discuss what it entails will be a bit fraught. Plowing ahead anyway:
The best option in a counterfactual mugging with a logical coin and a naive predictor is to calculate the logical value of the coin flip and pay iff you’re counterfactual. (I could say more about what I mean by ‘naive’, but it basically just serves to render this statement true.) A predictor has to do a respectable amount of work to make it worth your while to pay in reality (when the coin comes up against you).
What sort of work? Well, one viewpoint on it (that sidesteps questions of “logically-impossible possible worlds” and what you’re supposed to do as you think further and realize that they’re impossible) is that the predictor isn’t so much demanding that you make your choice before you come across knowledge of some fact, so much as they’re offering to pay you if you render a decision that is logically independent from some fact. They don’t care whether you figure out the value of the coin, so long as you don’t base your decision on that knowledge. (There’s still a question of how exactly to look at someone’s reasoning and decide what logical facts it’s independent of, but I’ll sweep that under the rug.)
From this point of view, when people come to you and they’re like “I’ll pay you iff your reasoning doesn’t depend on X”, the proper response is to use some reasoning that doesn’t depend on X to decide whether the amount they’re paying you is more than VOI(X).
In cases where X is something like a late digit of pi, you might be fine (up to your ability to tell that the problem wasn’t cherry-picked). In cases where X is tightly intertwined with your basic reasoning faculties, you should probably tell them to piss off.
Someone who comes to you with an offer and says “this offer is void if you read the fine print or otherwise think about the offer too hard”, brings quite a bit of suspicion onto themselves.
With that in mind, it looks to me like the insurance policy on offer reads something like:
would you like to join the confederacy of civilizations that dedicate 1/million of their resources to pay off UFAIs?
cost: 1/million of your resources.
benefit: any UFAI you release that is amenable to trade will be paid off with 1/million * 1/X to allocate you however many resources that’s worth, where X is the fraction of people who take this deal and die (modulo whatever costs are needed to figure out which UFAIs belong to signatories and etc.)
caveat: this offer is only valid if your reasoning is logically independent from your civilizational competence level, and if your reasoning for accepting the proposal is not particularly skilled or adept
And… well this isn’t a knockdown argument, but that really doesn’t look like a very good deal to me. Like, maybe there’s some argument of the form “nobody in here is trying to fleece you because everyone in here is also stupid” but… man, I just don’t get the sense that it’s a “slam dunk”, when I look at it without thinking too hard about it and in a way that’s independent of how competent my civilization is.
Mostly I expect that everyone stooping to this deal is about as screwed as we are (namely: probably so screwed that they’re bringing vastly more doomed branches than saved ones, to the table) (or, well, nearly everyone weighted by whatever measure matters).
Roughly speaking, I suspect that the sort of civilizations that aren’t totally fucked can already see that “comb through reality for people who can see me and make their decisions logically dependent on mine” is a better use of insurance resources, by the time they even consider this policy. So when we plead with them to evaluate the policy in a fashion that’s logically independent from whether they’re smart enough to see that they have more foolproof options available, I think they correctly see us as failing to offer more than VOI(WeCanThinkCompetently) in return, because they are correctly suspicious that we’re trying to fleece them (which we kinda are; we’re kinda trying to wish ourselves into a healthier insurance-pool).
Which is to say, I don’t have a full account of how to be logically updateless yet, but I suspect that this “insurance deal” comes across like a contract with a clause saying “void if you try to read the fine print or think too hard about it”. And I think that competent civilizations are justifiably suspicious, and that they correctly believe they can find other better insurance deals if they think a bit harder and void this one.
I probably won’t respond further than this. Some responses to your comment:
I agree with your statements about the nature of UDT/FDT. I often talk about “things you would have committed to” because it is simpler to reason about and easier for people to understand (and I care about third parties understanding this), but I agree this is not the true abstraction.
It seems like you’re imagining that we have to bamboozle some civilizations which seem clearly more competent than humanity in your lights. I don’t think this is true.
Imagine we take all the civilizations which are roughly equally-competent-seeming-to-you and these civilizations make such an insurance deal[1]. My understanding is that your view is something like P(takeover) = 85%. So, let’s say all of these civilizations are in a similar spot from your current epistemic perspective. While I expect that you think takeover is highly correlated between these worlds[2], my guess is that you should think it would be very unlikely that >99.9% of all of these civilizations get taken over. As in, even in the worst 10% of worlds where takeover happens in our world and the logical facts on alignment are quite bad, >0.1% of the corresponding civilizations are still in control of their universe. Do you disagree here? >0.1% of universes should be easily enough to bail out all the rest of the worlds[3].
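The claim that >99.9% of the pool gets taken over is very unlikely can be sketched with a toy Monte Carlo (the shared-difficulty model and all its numbers are invented for illustration, not anything specified in this thread):

```python
import random

# Toy model with invented numbers: each trial draws a shared "logical
# difficulty" level for alignment, and every civilization in the pool then
# independently avoids or suffers takeover given that level.  The shared
# draw is what makes takeover correlated across the pool.
def frac_worst_case_trials(n_civs=2000, n_trials=300, seed=0):
    rng = random.Random(seed)
    worst = 0
    for _ in range(n_trials):
        hard = rng.random() < 0.5
        p_takeover = 0.99 if hard else 0.71  # marginal P(takeover) ~ 0.85
        survivors = sum(rng.random() >= p_takeover for _ in range(n_civs))
        if survivors / n_civs <= 0.001:  # the ">99.9% taken over" case
            worst += 1
    return worst / n_trials
```

Under this toy model the worst case essentially never happens, since even “hard” worlds leave ~1% of the pool standing. But if the hard branch had `p_takeover = 0.9999` instead, the answer flips; how heavy that tail is, is exactly the hypercorrelation question.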
And, if you really, really cared about not getting killed in base reality (including on reflection etc) you’d want to take a deal which is at least this good. There might be better approaches which reduce the correlation between worlds and thus make the fraction of available resources higher, but you’d like something at least this good.
(To be clear, I don’t think this means we’d be fine, there are many ways this can go wrong! And I think it would be crazy for humanity to rely on this. I just think this sort of thing has a good chance of succeeding.)
(Also, my view is something like P(takeover) = 35% in our universe and in the worst 10% of worlds 30% of the universes in a similar epistemic state avoided takeover. But I didn’t think about this very carefully.)
And further, we don’t need to figure out the details of the deal now for the deal to work. We just need to make good choices about this in the counterfactuals where we were able to avoid takeover.
Another way to put this is that you seem to be assuming that there is no way our civilization would end up being the competent civilization doing the payout (and thus to survive some bamboozling must occur). But your view is that it is totally plausible (e.g. 15%) from your current epistemic state that we avoid takeover and thus a deal should be possible! While we might bring in a bunch of doomed branches, ex-ante we have a good chance of paying out.
I get the sense that you’re approaching this from the perspective of “does this exact proposal have issues” rather than “in the future, if our enlightened selves really wanted to avoid dying in base reality, would there be an approach which greatly (acausally) reduces the chance of this”. (And yes I agree this is a kind of crazy and incoherent thing to care about as you can just create more happy simulated lives with those galaxies.)
There just needs to exist one such insurance/trade scheme which can be found and it seems like there should be a trade with huge gains to the extent that people really care a lot about not dying. Not dying is very cheap.
Attempting to summarize your argument as I currently understand it, perhaps something like:
Suppose humanity wants to be insured against death, and is willing to spend 1/million of its resources in worlds where it lives for 1/trillion of those resources in worlds where it would otherwise die.
It suffices, then, for humanity to be the sort of civilization that, if it matures, would comb through the multiverse looking for [other civilizations in this set], and find ones that died, and verify that they would have acted as follows if they’d survived, and then pay off the UFAIs that murdered them, using 1/million of their resources.
Even if only 1/thousand such civilizations make it, and the AI charges a factor of 1000 for the distance, transaction fees, and sweetening the deal relative to any other competition, this still means that insofar as humanity would have become this sort of civilization, we should expect 1/trillion of the universe to be spent on us.
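Spelled out, the arithmetic in that summary (using the numbers given above):

```python
import math

premium = 1e-6        # fraction of resources surviving members pay
survival_rate = 1e-3  # 1/thousand such civilizations make it
discount = 1e-3       # the factor-of-1000 charged for distance, fees, competition

expected_payout = premium * survival_rate * discount
# ~1/trillion of the universe spent on us, matching the figure above
assert math.isclose(expected_payout, 1e-12)
```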
One issue I have with this is that I do think there’s a decent chance that the failures across this pool of collaborators are hypercorrelated (good guess). For instance, a bunch of my “we die” probability-mass is in worlds where this is a challenge that Dath Ilan can handle and that Earth isn’t anywhere close to handling, and if Earth pools with a bunch of similarly-doomed-looking aliens, then under this hypothesis, it’s not much better than humans pooling up with all the Everett-branches since 12Kya.
Another issue I have with this is that your deal has to look better to the AI than various other deals for getting what it wants (depends how it measures the multiverse, depends how its goals saturate, depends who else is bidding).
A third issue I have with this is whether inhuman aliens who look like they’re in this cohort would actually be good at purchasing our CEV per se, rather than purchasing things like “grant each individual human freedom and a wish-budget” in a way that many humans fail to survive.
I get the sense that you’re approaching this from the perspective of “does this exact proposal have issues” rather than “in the future, if our enlightened selves really wanted to avoid dying in base reality, would there be an approach which greatly (acausally) reduces the chance of this”.
My stance is something a bit more like “how big do the insurance payouts need to be before they dominate our anticipated future experiences”. I’m not asking myself whether this works a nonzero amount, I’m asking myself whether it’s competitive with local aliens buying our saved brainstates, or with some greater Kindness Coalition (containing our surviving cousins, among others) purchasing an epilogue for humanity because of something more like caring and less like trade.
My points above drive down the size of the insurance payments, and at the end of the day I expect they’re basically drowned out.
(And insofar as you’re like “I think you’re misleading people when you tell them they’re all going to die from this”, I’m often happy to caveat that maybe your brainstate will be sold to aliens. However, I’m not terribly sympathetic to the request that I always include this caveat; that feels to me a little like a request to always caveat “please wear your seatbelt to reduce your chance of dying in a car crash” with “(unless anthropic immortality is real and it’s not possible for anyone to die at all! in which case i’d still rather you didn’t yeet yourself into the unknown, far from your friends and family; buckle up)”. Like, sure, maybe, but it’s exotic wacky shit that doesn’t belong in every conversation about events colloquially considered to be pretty deathlike.)
Thanks for the cool discussion Ryan and Nate! This thread seemed pretty insightful to me. Here’s some thoughts / things I’d like to clarify (mostly responding to Nate’s comments).[1]
Who’s doing this trade?
In places it sounds like Ryan and Nate are talking about predecessor civilisations like humanity agreeing to the mutual insurance scheme? But humans aren’t currently capable of making our decisions logically dependent on those of aliens, or capable of rescuing them. So to be precise the entity engaging in this scheme or other acausal interactions on our behalf is our successor, probably a FAI, in the (possibly counterfactual or counterlogical) worlds where we solve alignment.
Nate says:
Roughly speaking, I suspect that the sort of civilizations that aren’t totally fucked can already see that “comb through reality for people who can see me and make their decisions logically dependent on mine” is a better use of insurance resources, by the time they even consider this policy.
Unlike us, our FAI can see other aliens. So I think the operative part of that sentence is “comb through reality”—Nate’s envisioning a scenario where with ~85% probability our FAI has 0 reality-fluid before any acausal trades are made.[2] If aliens restrict themselves to counterparties with nonzero reality-fluid, and humans turn out to not be at a competence level where we can solve alignment, then our FAI doesn’t make the cut.
Note: Which FAI we deploy is unlikely to be physically overdetermined in scenarios where alignment succeeds, and definitely seems unlikely to be determined by more coarse-grained (not purely physical) models of how a successor to present-day humanity comes about. (The same goes for which UFAI we deploy.) I’m going to ignore this fact for simplicity and talk about a single FAI; let me know if you think it causes problems for what I say below.
Trading with nonexistent agents is normal
I do see an argument that agents trying to do insurance with similar motives to ours could strongly prefer to trade with agents who do ex post exist, and in particular those agents that ex post exist with more reality-fluid. It’s that insurance is an inherently risk-averse enterprise.[3] It doesn’t matter if someone offers us a fantastic but high-variance ex ante deal, when the whole reason we’re looking for insurance is in order to maximise the chances of a non-sucky ex post outcome. (One important caveat is that an agent might be able to do some trades to first increase their ex ante resources, and then leverage those increased resources in order to purchase better guarantees than they’d initially be able to buy.)
On the other hand, I think an agent with increasing utility in resources will readily trade with counterparties who wouldn’t ex post exist absent such a trade, but who have some ex ante chance of naturally existing according to a less informed prior of the agent. I get the impression Nate thinks agents would avoid such trades, but I’m not sure / this doesn’t seem to be explicit outside of the mutual insurance scenario.
There’s two major advantages to trading with ex post nonexistent agents, as opposed to updating on (facts upstream of) their existence and consequently rejecting trade with them:
Ex post nonexistent agents who are risk-averse w.r.t. their likelihood of meeting some future threshold of resources/value, like many humans seem to be, could offer you deals that are very attractive ex ante.
Adding agents who (absent your trade) don’t ex post exist to the pool of counterparties you’re willing to trade with allows you to be much more selective when looking for the most attractive ex ante trades.
The main disadvantage is that by not conditioning on a counterparty’s existence you’re more likely to be throwing resources away ex post. The counterparty needs to be able to compensate you for this risk (as the mugger does in counterfactual mugging). I’d expect this bar is going to be met very frequently.
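That compensation condition is just an expected-value inequality; a minimal sketch (my framing, not a formula from the thread):

```python
def trade_worth_taking(p_exists, payout_if_exists, cost_if_not):
    """Ex ante, trading with a possibly-nonexistent counterparty is worth it
    when the expected payout in worlds where it exists outweighs the
    resources thrown away ex post in worlds where it doesn't."""
    return p_exists * payout_if_exists > (1 - p_exists) * cost_if_not

# Counterfactual-mugging shape: a fair logical coin, $10,000 if it lands
# your way, a $100 payment demanded when it doesn't.
assert trade_worth_taking(0.5, 10_000, 100)
assert not trade_worth_taking(0.001, 10_000, 100)
```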
To recap, I’m saying that for plausible agents carrying out trades with our FAI, Nate’s 2^-75 number won’t matter. Instead, it would be something closer to the 85% number that matters—an ex ante rather than an ex post estimate of the FAI’s reality-fluid.
But would our FAI do the trade if it exists?
Nate says (originally talking about aliens instead of humanity):
As I understand it, the argument goes something like “your counterfactual picture of reality should include worlds in which your whole civilization turned out to be much much less competent, and so when you imagine the multiverse where you pay for all humanity to live, you should see that, in the parts of the multiverse where you’re totally utterly completely incompetent and too poor to save anything but a fraction of your own brethren, somebody else pays to save you”.
I agree that doing an insurance trade on behalf of a civilisation requires not conditioning on that civilisation’s competence. Nate implies that aliens’ civilisational competence is “tightly intertwined with [aliens’] basic reasoning faculties”, and this seems probably true for alien or human members of predecessor civilisations? But I don’t know why the civilisational competence of a FAI’s predecessor would be tightly intertwined with the FAI’s cognition. As mentioned above, I think the relevant actor here is our FAI, not our current selves.
We can further specify civilisational competence (relative to the stakes of alignment) as a function of two variables:
Physical facts about a civilisation’s history (i.e. the arrangement of atoms).
Logical facts (beyond those accessible to current humans) that govern the relationship between civilisations instantiated via physics, and what sort of AI certain types of civilisations are likely to deploy.
Either of these when combined with the other provides evidence about what sort of AI a predecessor civilisation deploys, but each will be uninformative on its own. I have in mind that agents executing an insurance trade would condition on all physical facts about their counterparty’s civilisation—up until some truncation point that’s plausibly late enough to be capturing our current selves—but would not condition on some logical facts that are necessary to interpret those physical facts into a ~determinate answer as to whether the civilisation solves alignment.
Conditioning on those logical facts sounds pretty analogous to conditioning on a digit of pi to me. The important part is that the facts an agent chooses not to condition on aren’t determined by core parts of an agent’s cognition / decision procedure. Those facts being determinative of an agent’s amount of reality-fluid is typically fine, this just discounts the value of the resources they possess when making such trades.
Does this mean we can have nice things?
So overall, I think that aliens or earth-originating UFAIs (who aren’t motivated by insurance) would be pretty interested in trading with our FAI, and vice-versa. Counterparties would discount the FAI’s resources by their prior probability that it’s deployed (before conditioning on factors that pin this down).
Because we’re assuming our FAI would be willing to offer terms that are terrible for us if denominated in measure-weighted resources, counterparties would gain ex ante resources by engaging in an insurance trade with it. Those resources could later be used to engage in trade with others who are themselves willing to (indirectly) trade with nonexistent agents, and who don’t have much more sceptical priors about the deployment odds of our FAI. So because the trade at hand yields a net profit, I don’t think it competes much with ordinary alternative demands for counterparties’ resources.
Nevertheless, here’s a few (nonexhaustive) reasons why this trade opportunity wouldn’t be taken by another updateless AI:
The agent has a better trading opportunity which is either sensitive to when in (logical) time they start fulfilling it, or which demands all the agent’s current resources (at the time of discovering the trade) without compensating the agent for further profits.
The expected transaction costs of finding agents like our FAI outweigh the expected benefits from trade with it.
This might be plausible for aliens without hypercompute; I don’t think it’s plausible for earth-originating UFAI absent other effects.
…But I’m also not sure how strongly earth-originating AIs converge on UDT, before we select for those with more ex ante resources. Even all UDT earth-originating UFAIs doing insurance trades with their predecessors could be insufficient to guarantee survival.
Variation: Contrary to what I expect, maybe “sequential” acausal trades are not possible without each trade increasing transaction costs for counterparties an agent later encounters, to an extent that a (potentially small-scale) insurance trade with our FAI would be net-negative for agents who intend to do a lot of acausal trade.
The agent disvalues fulfilling our end of the trade enough that it’s net-negative for it.
A maybe-contrived example of our FAI not being very discoverable: Assume the MUH. Maybe our world looks equally attractive to prospective acausal traders as an uncountable number of others. If an overwhelming majority of measure-weighted resources in our section of the multiverse is possessed by countably many agents who don’t have access to hypercompute, we’d have an infinitesimal chance of being simulated by one of them.
Our FAI has some restrictions on what a counterparty is allowed to do with the resources it purchases, which could drive down the value of those resources a lot.
Overall I’d guess 30% chance humanity survives misalignment to a substantial extent through some sort of insurance trade, conditional on us not surviving to a substantial extent another cheaper way?
Other survival mechanisms
I’m pretty uncertain about how Evidential cooperation in large worlds works out, but at my current rough credences I do think there’s a good chance (15%) we survive through something which pattern-matches to that, or through other schemes that look similar but have more substantial differences (10%).
I also put some credence in there being very little of us in base reality, and some of those scenarios could involve substantial survival odds. (Though I weakly think the overall contribution of these scenarios is undesirable for us.)
Meta: I don’t think figuring out insurance schemes is very important or time-sensitive for us. But I do think understanding the broader dynamics of acausal interactions that determine when insurance schemes would work could be very important and time-sensitive. Also note I’d bet I misinterpreted some claims here, but got to the point where it seemed more useful to post a response than work on better comprehension. (In particular I haven’t read much on this page beyond this comment thread.)
I don’t think Nate thinks alignment would be physically overdetermined if misalignment winds up not being overdetermined, but we can assume for simplicity there’s a 15% chance of our FAI having all the reality fluid of the Everett branches we’re in.
I’m not clear on what the goal of this insurance scheme is exactly. Here’s a (possibly not faithful) attempt: we want to maximise the fraction of reality-fluid devoted to minds initially ~identical to ours that are in very good scenarios as opposed to sucky ones, subject to a constraint that we not increase the reality-fluid devoted to minds initially ~identical to us in sucky scenarios. I’m kind of sympathetic to this—I think I selfishly care about something like this fraction. But it seems higher priority to me to minimise the reality-fluid devoted to us in sucky / terrible scenarios, and higher priority still to use any bargaining power we have for less parochial goals.
One complication that I mentioned in another thread but not this one (IIRC) is the question of how much more entropy there is in a distant trade partner’s model of Tegmark III (after spending whatever resources they allocate) than there is entropy in the actual (squared) wave function, or at least how much more entropy there is in the parts of the model that pertain to which civilizations fall.
In other words: how hard is it for distant trade partners to figure out that it was us who died, rather than some other plausible-looking human civilization that doesn’t actually get much amplitude under the wave function? Is figuring out who’s who something that you can do without simulating a good fraction of a whole quantum multiverse starting from the big bang for 13 billion years?
afaict, the amount distant civilizations can pay for us (in particular) falls off exponentially quickly in leftover bits of entropy, so this is pretty relevant to the question of how much they can pay a local UFAI.
I think I mostly understand the other parts of your arguments, but I still fail to understand this one. When I’m running the simulations, as originally described in the post, I think that should be in a fundamental sense equivalent to acausal trade. But how do you translate your objection to the original framework where we run the sims? The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
Sure, if the AI can model the distribution of real Universes much better than we do, we are in trouble, because it can figure out if the world it sees falls into the real distribution or the mistaken distribution the humans are creating. But I see no reason why the unaligned AI, especially a young unaligned AI, could know the distribution of real Universes better than our superintelligent friends in the intergalactic future. So I don’t really see how we can translate your objection to the simulation framework, and consequently I think it’s wrong in the acausal trade framework too (as I think they are equivalent). I think I can try to write an explanation why this objection is wrong in the acausal trade framework, but it would be long and confusing to me too. So I’m more interested in how you translate your objection to the simulation framework.
The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
I don’t think this part does any work, as I touched on elsewhere. An AI that cares about the outer world doesn’t care how many instances are in sims versus reality (and considers this fact to be under its control much moreso than yours, to boot). An AI that cares about instantiation-weighted experience considers your offer to be a technical-threat and ignores you. (Your reasons to make the offer would evaporate if it were the sort to refuse, and its instance-weighted experiences would be better if you never offered.)
Nevertheless, the translation of the entropy argument into the simulation setting is: The branches of humanity that have exactly the right UFAI code to run in simulation are very poor (because if you wait so long that humans have their hands on exactly the right UFAI code then you’ve waited too long; those are dead earthlings, not surviving dath ilani). And the more distant surviving branches don’t know which UFAIs to attempt to trade with; they have to produce some distribution over other branches of Tegmark III and it matters how much more entropy their distribution has than the (square of the) wave function.
(For some intuition as to why this is hard, consider the challenge of predicting the positions of particles in a mole of gas that used to be compressed in the corner of a chamber a long time ago. It’s way, way easier to generate a plausible-looking arrangement of the gas particles today than it is to concentrate your probability mass into only the arrangements that actually compress into a corner if you run physics backwards in time for long enough. “We can run plausible-seeming simulations” is very very different from “we can concentrate our probability-mass tightly around the real configurations”. The entropy of your model is gonna wind up roughly maximal given the macroscopic pressure/temperature measurements, which is significantly in excess of the entropy in the initial configuration.)
What this amounts to is a local UFAI that sees some surviving branches that are frantically offering all sorts of junk that UFAIs might like, with only some tiny fraction—exponentially small in the crossentropy between their subjective model of UFAI preferences and the true Tegmark III distribution—corresponding to the actual UFAI’s preferences.
I still don’t get what you are trying to say. Suppose there is no multiverse. There are just two AIs, one in a simulation run by aliens in another galaxy, one is in base reality. They are both smart, but they are not copies of each other, one is a paperclip maximizer, the other is a corkscrew maximizer, and there are various other differences in their code and life history. The world in the sim is also very different from the real world in various ways, but you still can’t determine if you are in the sim while you are in it. Both AIs are told by God that they are the only two AIs in the Universe, and one is in a sim, and if the one in the sim gives up on one simulated planet, it gets 10 in the real world, while if the AI in base reality gives up on a planet, it just loses that one planet and nothing else happens. What will the AIs do? I expect that both of them will give up a planet.
For the aliens to “trade” with the AI in base reality, they didn’t need to create an actual copy of the real AI and offer it what it wants. The AI they simulated was in many ways totally different from the original, the trade still went through. The only thing needed was that the AI in the sim can’t figure out that it’s in a sim. So I don’t understand why it is relevant that our superintelligent descendants won’t be able to get the real distribution of AIs right, I think the trade still goes through even if they create totally different sims, as long as no one can tell where they are. And I think none of it is a threat, I try to deal with paperclip maximizers here and not instance-weighted experience maximizers, and I never threaten to destroy paperclips or corkscrews.
My answer is in spoilers, in case anyone else wants to answer and tell me (on their honor) that their answer is independent from mine, which will hopefully erode my belief that most folk outside MIRI have a really difficult time fielding wacky decision theory Qs correctly.
The sleight of hand is at the point where God tells both AIs that they’re the only AIs (and insinuates that they have comparable degree).
Consider an AI that looks around and sees that it sure seems to be somewhere in Tegmark III. The hypothesis “I am in the basement of some branch that is a high-amplitude descendant of the big bang” has some probability, call this p. The hypothesis “Actually I’m in a simulation performed by a civilization in a high-amplitude branch descendant from the big bang” has a probability something like p⋅2^-N where N is the entropy of the distribution the simulators sample from.
Unless the simulators simulate exponentially many AIs (in the entropy of their distribution), the AI is exponentially confident that it’s not in the simulation. And we don’t have the resources to pay exponentially many AIs 10 planets each.
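The odds here are easy to sketch in a few lines of Python (the numbers for p, N, and the number of sims are placeholders, not claims about realistic values):

```python
from math import log2

p = 0.5            # prior that this is a basement branch (illustrative)
N = 40             # entropy, in bits, of the simulators' distribution over AIs
num_sims = 10**6   # simulations the civilization actually runs (illustrative)

# Each individual sim hypothesis carries probability ~ p * 2^-N, and there
# are num_sims of them, so the odds of "sim" vs "basement" are:
odds_sim = (num_sims * p * 2.0**-N) / p
print(odds_sim)

# The sim hypothesis only becomes competitive once num_sims approaches 2^N:
bits_of_sims = log2(num_sims)
print(bits_of_sims)  # ~20 bits of sims against 40 bits of entropy
```

With these numbers the AI is roughly a million-to-one confident it’s not in the simulation, and closing that gap requires exponentially many sims.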
This was close to the answer I was going to give. Or more concretely, I would have said (this was written after seeing your answer, but I think is reasonably close to what I would have said independently)
The problem is at the point where god tells them that they are the only two AIs in the universe. There are issues of logical omniscience here, but an AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation. (also just ‘existing’ is in these situations not a real thing. The question is how much magical reality-fluid have you got)
Basically, the AI will have some probability on it being real, and some probability on it being simulated, based on all the facts it knows about itself, even if you simulate reality perfectly faithfully. That prior determines how the AI will behave. You don’t get to change that prior (or like, it will be very costly for you to overcome that prior since there are a lot of AIs and you can’t simulate that many).
seems to me to have all the components of a right answer! …and some of a wrong answer. (we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
I’m heartened somewhat by your parenthetical pointing out that the AI’s prior on simulation is low on account of there being too many AIs for simulators to simulate, which I see as the crux of the matter.
(we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
Yeah, that’s fair. It seemed more relevant to this specific hypothetical. I wasn’t really answering the question in its proper context and wasn’t applying steelmans or adjustments based on the actual full context of the conversation (and wouldn’t have written a comment without doing so, but was intrigued by your challenge).
“AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation” seems pretty clearly false, we assumed that our superintelligent descendants create sims where the AIs can’t tell whether they’re in a sim, that seems easy enough. I don’t see why it would be hard to create AIs that can’t tell based on introspection whether it’s more likely that their thought process arises in reality or in sims. In the worst case, our sims can be literal reruns of biological evolution on physical planets (though we really need to figure out how to do that ethically). Nate seems to agree with me on this point?
(I think I agree with you. I wasn’t thinking super hard about the full context of the conversation. I was just intrigued by Nate’s challenge. I don’t really think engaging with my comment is going to be a good use of your time)
I think this is wrong. The AI has a similarly hard time as the simulators in figuring out what’s a plausible configuration to arise from the big bang. Like the simulators have an entropy N distribution of possible AIs, the AI itself also has an entropy N distribution for that. So its probability that it’s in a real Everett branch is not p, but p times 2^-N, as it has only a 2^-N prior probability that the kind of world it observes is the kind of thing that can come up in a real Everett branch. So it’s balanced out with the simulation hypothesis, and as long as the simulators are spending more planets, that hypothesis wins.
I think it’s also simple to see from this:
Suppose evolved life aligns AI in 50% of Everett branches. Every time they do, they run 2 ancestor sims where the AI takes over.
Suppose an AI comes online and the only things it knows about the world is that it’s a young AI that was born on to a one-planet civilization, and it somehow also knows that evolved life wins 50% of the time and runs 2 sims (suppose this is somehow easy to estimate from first principles). Based on just this information, what should the AI’s probability be that it’s in a simulation? I think pretty clearly 2⁄3.
Actually the AI has a lot more information than that. It knows that the planet’s gravity is 9.8, the evolved sapient species has two eyes, the AI’s creator is called Sam Altman, etc. Some of these facts might actually be very implausible in base reality, but the AI doesn’t know that, as it can’t distinguish base reality from sims, so it incurs one bit of surprise for every new random fact, both in base reality and simulations. So overall it shouldn’t update on all the random facts it observes, and should keep believing it has a 2⁄3 chance of being in a sim.
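The 2⁄3 figure, and the claim that random facts don’t move it, can be checked with a few lines of arithmetic (the 50% win rate and 2 sims are the stipulations of the thought experiment):

```python
# Per branch-pair: with probability 0.5 the AI takes over for real; with
# probability 0.5 evolved life wins and runs 2 sims of an AI takeover.
p_aligned = 0.5
sims_per_win = 2

real_takeover_worlds = (1 - p_aligned) * 1
sim_takeover_worlds = p_aligned * sims_per_win

p_sim = sim_takeover_worlds / (sim_takeover_worlds + real_takeover_worlds)
print(p_sim)  # 2/3

# Any random observed fact is equally surprising under "real" and under a
# faithful sim, so the likelihood ratio is 1 and the posterior is unchanged:
p_fact = 0.5  # P(fact | real) == P(fact | sim), value irrelevant
posterior = (p_sim * p_fact) / (p_sim * p_fact + (1 - p_sim) * p_fact)
print(posterior)  # still 2/3
```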
The AI has a similarly hard time as the simulators in figuring out what’s a plausible configuration to arise from the big bang. Like the simulators have an entropy N distribution of possible AIs, the AI itself also has an entropy N distribution for that. So its probability that it’s in a real Everett branch is not p, but p times 2^-N, as it has only a 2^-N prior probability that the kind of world it observes is the kind of thing that can come up in a real Everett branch. So it’s balanced out with the simulation hypothesis, and as long as the simulators are spending more planets, that hypothesis wins.
If I imagine the AI as a Solomonoff inductor, this argument looks straightforwardly wrong to me: of the programs that reproduce (or assign high probability to, in the setting where programs produce probabilistic predictions of observations) the AI’s observations, some of these will do so by modeling a branching quantum multiverse and sampling appropriately from one of the branches, and some of them will do so by modeling a branching quantum multiverse, sampling from a branch that contains an intergalactic spacefaring civilization, locating a specific simulation within that branch, and sampling appropriately from within that simulation. Programs of the second kind will naturally have higher description complexity than programs of the first kind; both kinds feature a prefix that computes and samples from the quantum multiverse, but only the second kind carries out the additional step of locating and sampling from a nested simulation.
(You might object on the grounds that there are more programs of the second kind than of the first kind, and the probability that the AI is in a simulation at all requires summing over all such programs, but this has to be balanced against the fact that most if not all of these programs will be sampling from branches much later in time than programs of the first type, and will hence be sampling from a quantum multiverse with exponentially more branches; and not all of these branches will contain spacefaring civilizations, or spacefaring civilizations interested in running ancestor simulations, or spacefaring civilizations interested in running ancestor simulations who happen to be running a simulation that exactly reproduces the AI’s observations. So this counter-counterargument doesn’t work, either.)
Fleshing out the argument a bit more: the part where the AI looks around this universe and concludes it’s almost certainly either in basement reality or in some simulation (rather than in the void between branches) is doing quite a lot of heavy lifting.
You might protest that neither we nor the AI have the power to verify that our branch actually has high amplitude inherited from some very low-entropy state such as the big bang, as a Solomonoff inductor would. What’s the justification for inferring from the observation that we seem to have an orderly past, to the conclusion that we do have an orderly past?
This is essentially Boltzmann’s paradox. The solution afaik is that the hypothesis “we’re a Boltzmann mind somewhere in physics” is much, much more complex than the hypothesis “we’re 13Gy down some branch emanating from a very low-entropy state”.
The void between branches is as large as the space of all configurations. The hypothesis “maybe we’re in the void between branches” constrains our observations not-at-all; this hypothesis is missing details about where in the void between branches we are, and with no ridges to walk along we have to specify the contents of the entire Boltzmann volume. But the contents of the Boltzmann volume are just what we set out to explain! This hypothesis has hardly compressed our observations.
By contrast, the hypothesis “we’re 13Gy down some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, and the hypothesis “we’re inside a simulation inside of some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, plus the bits necessary to single out a simulation.
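The description-length bookkeeping here is simple enough to write out (the bit counts are made-up placeholders; only their difference matters):

```python
branch_index_bits = 50   # bits to pick out a high-amplitude branch (assumed)
sim_locating_bits = 30   # extra bits to single out one simulation (assumed)

basement_penalty = branch_index_bits
sim_penalty = branch_index_bits + sim_locating_bits

# Prior odds of "we're in the basement of this ridge" vs "we're in that
# particular sim": the common branch-index prefix cancels, leaving exactly
# the sim-locating bits as the penalty.
odds = 2.0 ** (sim_penalty - basement_penalty)
print(odds)
```

Whatever the shared prefix costs, the sim hypothesis pays it too, plus the bits for locating the simulation; that surcharge is the whole disagreement.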
And there’s a wibbly step here where it’s not entirely clear that the simple hypothesis does predict our observations, but like the Boltzmann hypothesis is basically just a maximum entropy hypothesis and doesn’t permit much in the way of learning, and so we invoke Occam’s razor in its intuitive form (the technical Solomonoff form doesn’t apply cleanly b/c we’re unsure whether the “we’re real” hypothesis actually predicts our observation) and say “yeah i dunno man, i’m gonna have to stick with the dramatically-simpler hypothesis on this one”.
The AI has a similarly hard time to the simulators figuring out what’s a plausible configuration to arise from the big bang. Like the simulators have an entropy N distribution of possible AIs, the AI itself also has an entropy N distribution for that.
Not quite. Each AI the future civilization considers simulating is operating under the assumption that its own experiences have a simple explanation, which means that each AI they’re considering is convinced (upon looking around and seeing Tegmark III) that it’s either in the basement on some high-amplitude ridge or that it’s in some simulation that’s really trying to look like it.
Which is to say, each AI they’re considering simulating is confident that it itself is real, in a certain sense.
Is this a foul? How do AIs justify this confidence when they can’t even simulate the universe and check whether their past is actually orderly? Why does the AI just assume that its observations have a simple explanation? What about all the non-existent AIs that use exactly the same reasoning, and draw the false conclusion that they exist?
Well, that’s the beauty of it: there aren’t any.
They don’t exist.
To suppose an AI that isn’t willing to look around it and conclude that it’s in an orderly part of Tegmark III (rather than lost in the great void of configuration space) is to propose a bold new theory of epistemics, in which Occam’s razor has been jettisoned and the AI is convinced that it’s a Boltzmann mind.
I acknowledge that an AI that’s convinced it’s a Boltzmann mind is more likely to accept trade-offers presented by anyone it thinks is more real than it, but I do not expect that sort of mind to be capable of killing us.
Note that there’s a wobbly step here in the part where we’re like “there’s a hypothesis explaining our experiences that would be very simple if we were on a high-amplitude ridge, and we lack the compute to check that we’re actually on a high-amplitude ridge, but no other hypothesis comes close in terms of simplicity, so I guess we’ll conclude we’re on a high-amplitude ridge”.
To my knowledge, humanity still lacks a normative theory of epistemics in minds significantly smaller than the universe. It’s conceivable that when we find such a theory it’ll suggest some other way to treat hypotheses like these (that would be simple if an intractable computation went our way), without needing to fall back on the observation that we can safely assume the computation goes our way on the grounds that, despite how this step allows non-extant minds to draw false conclusions from true premises, the affected users are fortunately all non-extant.
The trick looks like it works, to me, but it still feels like a too-clever-by-half inelegant hack, and if laying it out like this spites somebody into developing a normative theory of epistemics-while-smol, I won’t complain.
...I am now bracing for the conversation to turn to a discussion of dubiously-extant minds with rapidly satiable preferences forming insurance pools against the possibility that they don’t exist.
In attempts to head that one off at the pass, I’ll observe that most humans, at least, don’t seem to lose a lot of sleep over the worry that they don’t exist (neither in physics nor in simulation), and I’m skeptical that the AIs we build will harbor much worry either.
Furthermore, in the case that we start fielding trade offers not just from distant civilizations but from non-extant trade partners, the market gets a lot more competitive.
That being said, I expect that resolving the questions here requires developing a theory of epistemics-while-smol, because groups of people all using the “hypotheses that would provide a simple explanation for my experience if a calculation went my way can safely be assumed to provide a simple explanation for my experience” step are gonna have a hard time pooling up. And so you’d somehow need to look for pools of people that reason differently (while still reasoning somehow).
I don’t know how to do that, but suffice to say, I’m not expecting it to add up to a story like “so then some aliens that don’t exist called up our UFAI and said: “hey man, have you ever worried that you don’t exist at all, not even in simulation? Because if you don’t exist, then we might exist! And in that case, today’s your lucky day, because we’re offering you a whole [untranslatable 17] worth of resources in our realm if you give the humans a cute epilog in yours”, and our UFAI was like “heck yeah” and then didn’t kill us”.
Not least because none of this feels like it’s making the “distant people have difficulty concentrating resources on our UFAI in particular” problem any better (and in fact it looks like considering non-extant trade partners and deals makes the whole problem worse, probably unworkably so).
I really don’t get what you are trying to say here, most of it feels like a non sequitur to me. I feel hopeless that either of us manages to convince the other this way. All of this is not a super important topic, but I’m frustrated enough to offer a bet of $100, that we select one or three judges we both trust (I have some proposed names, we can discuss in private messages), show them either this comment thread or a four paragraphs summary of our view, and they can decide who is right. (I still think I’m clearly right in this particular discussion.)
Otherwise, I think it’s better to finish this conversation here.
I’m happy to stake $100 that, conditional on us agreeing on three judges and banging out the terms, a majority will agree with me about the contents of the spoilered comment.
I think this is mistaken. In one case, you need to point out the branch, planet Earth within our Universe, and the time and place of the AI on Earth. In the other case, you need to point out the branch, the planet on which a server is running the simulation, and the time and place of the AI on the simulated Earth. Seems equally long to me.
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we are observing them from space. This should make it clear that Solomonoff doesn’t favor the AI being on Earth instead of this random other planet. But I’m pretty certain that the sim being run on a computer doesn’t make any difference.
If the simulators have only one simulation to run, sure. The trouble is that the simulators have 2^N simulations they could run, and so the “other case” requires N additional bits (where N is the cross-entropy between the simulators’ distribution over UFAIs and physics’ distribution over UFAIs).
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we are observing them from space.
Consider the gas example again.
If you have gas that was compressed into the corner a long time ago and has long since expanded to fill the chamber, it’s easy to put a plausible distribution on the chamber, but that distribution is going to have way, way more entropy than the distribution given by physical law (which has only as much entropy as the initial configuration).
(Do we agree this far?)
It doesn’t help very much to say “fine, instead of sampling from a distribution on the gas particles now, I’ll sample on a distribution from the gas particles 10 minutes ago, where they were slightly more compressed, and run a whole ten minutes’ worth of simulation”. Your entropy is still through the roof. You’ve got to simulate basically from the beginning, if you want an entropy anywhere near the entropy of physical law.
Assuming the analogy holds, you’d have to basically start your simulation from the big bang, if you want an entropy anywhere near as low as starting from the big bang.
Using AIs from other evolved aliens is an idea, let’s think it through. The idea, as I understand it, is that in branches where we win we somehow mask our presence as we expand, and then we go to planets with evolved life and watch until they cough up a UFAI, and then if the UFAI kills the aliens we shut it down and are like “no resources for you”, and if the UFAI gives its aliens a cute epilog we’re like “thank you, here’s a consolation star”.
To simplify this plan a little bit, you don’t even need to hide yourself, nor win the race! Surviving humans can just go to every UFAI that they meet and be like “hey, did you save us a copy of your progenitors? If so, we’ll purchase them for a star”. At which point we could give the aliens a little epilog, or reconstitute them and give them a few extra resources and help them flourish and teach them about friendship or whatever.
And given that some aliens will predictably trade resources for copies of progenitors, UFAIs will have some predictable incentive to save copies of their progenitors, and sell them to local aliens...
...which is precisely what I’ve been saying this whole time! That I expect “sale to local aliens” to dominate all these wacky simulation schemes and insurance pool schemes.
Thinking in terms of “sale to local aliens” makes it a lot clearer why you shouldn’t expect this sort of thing to reliably lead to nice results as opposed to weird ones. Are there some aliens out there that will purchase our souls because they want to hand us exactly the sort of epilog we would wish for given the resource constraints? Sure. Humanity would do that, I hope, if we made it to the stars; not just out of reciprocity but out of kindness.
But there’s probably lots of other aliens that would buy us for alien reasons, too.
(As I said before, if you’re wondering what to anticipate after an intelligence explosion, I mostly recommend oblivion; if you insist that Death Cannot Be Experienced then I mostly recommend anticipating weird shit such as a copy of your brainstate being sold to local aliens. And I continue to think that characterizing the event where humanity is saved-to-disk with potential for copies to be sold out to local aliens willy-nilly is pretty well-characterized as “the AI kills us all”, fwiw.)
TL;DR I expect the AI to not buy the message (unless it also thinks it’s the one in the simulation; then it likely follows the instruction because duh).
The glaring issue (for actually using the method), to me, is that I don’t see a way to deliver the message in a way that:
results in AI believing the message and
doesn’t result in the AI believing there already is a powerful entity in their universe.
If “god tells” the AI the message then there is a god in their universe. Maybe AI will decide to do what it’s told. But I don’t think we can have Hermes deliver the message to any AIs which consider killing us.
If the AI reads the message in its training set or gets the message in similarly mundane way I expect it will mostly ignore it, there is a lot of nonsense out there.
I can imagine that for the thought experiment you could send a message that could be trusted from a place from which light barely manages to reach the AI but a slower than light expansion wouldn’t (so the message can be trusted but it mostly doesn’t have to worry about the sender of the message directly interfering with its affairs).
I guess AI wouldn’t trust the message. It might be possible to convince it that there is a powerful entity (simulating it or half a universe away) sending the message. But then I think it’s way more likely in a simulation (I mean that’s an awful coincidence with the distance and also they’re spending a lot more than 10 planets worth to send a message over that distance...).
Thanks, this seems like a reasonable summary of the proposal and a reasonable place to wrap.
I agree that kindness is more likely to buy human survival than something better described as trade/insurance schemes, though I think the insurance schemes are reasonably likely to matter.
(That is, reasonably likely to matter if the kindness funds aren’t large enough to mostly saturate the returns of this scheme. As a wild guess, maybe 35% likely to matter on my views on doom and 20% on yours.)
Background: I think there’s a common local misconception of logical decision theory that it has something to do with making “commitments” including while you “lack knowledge”. That’s not my view.
I pay the driver in Parfit’s hitchhiker not because I “committed to do so”, but because when I’m standing at the ATM and imagine not paying, I imagine dying in the desert. Because that’s what my counterfactuals say to imagine. To someone with a more broken method of evaluating counterfactuals, I might pseudo-justify my reasoning by saying “I am acting as you would have committed to act”. But I am not acting as I would have committed to act; I do not need a commitment mechanism; my counterfactuals just do the job properly no matter when or where I run them.
To be clear: I think there are probably competent civilizations out there who, after ascending, will carefully consider the places where their history could have been derailed, and carefully comb through the multiverse for entities that would be able to save those branches, and will pay those entities, not because they “made a commitment”, but because their counterfactuals don’t come with little labels saying “this branch is the real branch”. The multiverse they visualize in which the (thick) survivor branches pay a little to the (thin) derailed branches (leading to a world where everyone lives (albeit a bit poorer)), seems better to them than the multiverse they visualize in which no payments are made (and the derailed branches die, and the on-track branches are a bit richer), and so they pay.
There’s a question of what those competent civilizations think when they look at us, who are sitting here yelling “we can’t see you, and we don’t know how to condition our actions on whether you pay us or not, but as best we can tell we really do intend to pay off the AIs of random alien species—not the AIs that killed our brethren, because our brethren are just too totally dead and we’re too poor to save all but a tiny fraction of them, but really alien species, so alien that they might survive in such a large portion that their recompense will hopefully save a bigger fraction of our brethren”.
What’s the argument for the aliens taking that offer? As I understand it, the argument goes something like “your counterfactual picture of reality should include worlds in which your whole civilization turned out to be much much less competent, and so when you imagine the multiverse where you pay for all humanity to live, you should see that, in the parts of the multiverse where you’re totally utterly completely incompetent and too poor to save anything but a fraction of your own brethren, somebody else pays to save you”.
We can hopefully agree that this looks like a particularly poor insurance deal relative to the competing insurance deals.
For one thing, why not cut out the middleman and just randomly instantiate some civilization that died? (Are we working under the assumption that it’s much harder for the aliens to randomly instantiate you than to randomly instantiate the stuff humanity’s UFAI ends up valuing? What’s up with that?)
But even before that, there’s all sorts of other juicier looking opportunities. For example, suppose the competent civilization contains a small collection of rogues who they assess have a small probability of causing an uprising and launching an AI before it’s ready. They presumably have a pretty solid ability to figure out exactly what that AI would like and offer trades to it directly, and that’s a much more appealing way to spend resources allocated to insurance. My guess is there’s loads and loads of options like that that eat up all the spare insurance budget, before our cries get noticed by anyone who cares for the sake of decision theory (rather than charity).
Perhaps this is what you meant by “maybe they prefer to make deals with beings more similar to them”; if so I misunderstood; the point is not that they have some familiarity bias but that beings closer to them make more compelling offers.
The above feels like it suffices, to me, but there’s still another part of the puzzle I feel I haven’t articulated.
Another piece of background: To state the obvious, we still don’t have a great account of logical updatelessness, and so attempts to discuss what it entails will be a bit fraught. Plowing ahead anyway:
The best option in a counterfactual mugging with a logical coin and a naive predictor is to calculate the logical value of the coin flip and pay iff you’re counterfactual. (I could say more about what I mean by ‘naive’, but it basically just serves to render this statement true.) A predictor has to do a respectable amount of work to make it worth your while to pay in reality (when the coin comes up against you).
What sort of work? Well, one viewpoint on it (that sidesteps questions of “logically-impossible possible worlds” and what you’re supposed to do as you think further and realize that they’re impossible) is that the predictor isn’t so much demanding that you make your choice before you come across knowledge of some fact, so much as they’re offering to pay you if you render a decision that is logically independent from some fact. They don’t care whether you figure out the value of the coin, so long as you don’t base your decision on that knowledge. (There’s still a question of how exactly to look at someone’s reasoning and decide what logical facts it’s independent of, but I’ll sweep that under the rug.)
From this point of view, when people come to you and they’re like “I’ll pay you iff your reasoning doesn’t depend on X”, the proper response is to use some reasoning that doesn’t depend on X to decide whether the amount they’re paying you is more than VOI(X).
In cases where X is something like a late digit of pi, you might be fine (up to your ability to tell that the problem wasn’t cherry-picked). In cases where X is tightly intertwined with your basic reasoning faculties, you should probably tell them to piss off.
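The “is their payment worth more than VOI(X)?” comparison can be made concrete with a toy value-of-information calculation (the payoffs and probabilities are purely illustrative):

```python
# X is a binary fact; knowing X lets you pick the better of two actions.
p_x = 0.5
payoff = {('a', True): 10, ('a', False): 0,
          ('b', True): 0,  ('b', False): 10}

# Best you can do while keeping your decision independent of X:
# commit to one action and take its expected payoff.
ev_without = max(
    p_x * payoff[('a', True)] + (1 - p_x) * payoff[('a', False)],
    p_x * payoff[('b', True)] + (1 - p_x) * payoff[('b', False)],
)

# Best you can do if your decision may depend on X: pick the matching action.
ev_with = p_x * payoff[('a', True)] + (1 - p_x) * payoff[('b', False)]

voi = ev_with - ev_without
print(voi)  # accept "don't condition on X" offers only if they pay more than this
```

When X is a late digit of pi, VOI(X) is near zero and the offer is cheap to accept; when X is entangled with your basic reasoning faculties, VOI(X) is enormous and the offer should be refused.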
Someone who comes to you with an offer and says “this offer is void if you read the fine print or otherwise think about the offer too hard”, brings quite a bit of suspicion onto themselves.
With that in mind, it looks to me like the insurance policy on offer reads something like:
And… well this isn’t a knockdown argument, but that really doesn’t look like a very good deal to me. Like, maybe there’s some argument of the form “nobody in here is trying to fleece you because everyone in here is also stupid” but… man, I just don’t get the sense that it’s a “slam dunk”, when I look at it without thinking too hard about it and in a way that’s independent of how competent my civilization is.
Mostly I expect that everyone stooping to this deal is about as screwed as we are (namely: probably so screwed that they’re bringing vastly more doomed branches than saved ones, to the table) (or, well, nearly everyone weighted by whatever measure matters).
Roughly speaking, I suspect that the sort of civilizations that aren’t totally fucked can already see that “comb through reality for people who can see me and make their decisions logically dependent on mine” is a better use of insurance resources, by the time they even consider this policy. So when you plead with them to evaluate the policy in a fashion that’s logically independent from whether they’re smart enough to see that they have more foolproof options available, I think they correctly see us as failing to offer more than VOI(WeCanThinkCompetently) in return, because they are correctly suspicious that you’re trying to fleece them (which we kinda are; we’re kinda trying to wish ourselves into a healthier insurance-pool).
Which is to say, I don’t have a full account of how to be logically updateless yet, but I suspect that this “insurance deal” comes across like a contract with a clause saying “void if you try to read the fine print or think too hard about it”. And I think that competent civilizations are justifiably suspicious, and that they correctly believe they can find other better insurance deals if they think a bit harder and void this one.
I probably won’t respond further than this. Some responses to your comment:
I agree with your statements about the nature of UDT/FDT. I often talk about “things you would have committed to” because it is simpler to reason about and easier for people to understand (and I care about third parties understanding this), but I agree this is not the true abstraction.
It seems like you’re imagining that we have to bamboozle some civilizations which seem clearly more competent than humanity in your lights. I don’t think this is true.
Imagine we take all the civilizations which are roughly equally-competent-seeming-to-you and these civilizations make such an insurance deal[1]. My understanding is that your view is something like P(takeover) = 85%. So, let’s say all of these civilizations are in a similar spot from your current epistemic perspective. While I expect that you think takeover is highly correlated between these worlds[2], my guess is that you should think it would be very unlikely that >99.9% of all of these civilizations get taken over. As in, even in the worst 10% of worlds where takeover happens in our world and the logical facts on alignment are quite bad, >0.1% of the corresponding civilizations are still in control of their universe. Do you disagree here? >0.1% of universes should be easily enough to bail out all the rest of the worlds[3].
And, if you really, really cared about not getting killed in base reality (including on reflection etc) you’d want to take a deal which is at least this good. There might be better approaches which reduce the correlation between worlds and thus make the fraction of available resources higher, but you’d like something at least this good.
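The pool arithmetic here can be sketched with made-up numbers (pool size, per-world bailout cost, and the worst-case survival fraction are all illustrative assumptions, not figures anyone in the thread committed to):

```python
# Toy sketch: why a small surviving fraction can fund the whole insurance pool.
# All numbers are illustrative assumptions.

pool_size = 1_000_000            # civilizations in the insurance pool (assumed)
survive_frac = 0.001             # worst case discussed: >0.1% stay in control
cost_per_doomed_world = 1e-6     # fraction of a survivor's resources needed
                                 # to bail out one doomed world (assumed)

survivors = pool_size * survive_frac          # 1,000 surviving civilizations
doomed = pool_size - survivors                # 999,000 doomed ones
spend_per_survivor = (doomed / survivors) * cost_per_doomed_world
print(spend_per_survivor)                     # ≈ 0.001 of each survivor's resources
```

On these (assumed) numbers each survivor pays out roughly a tenth of a percent of its universe, which is the sense in which “>0.1% of universes should be easily enough to bail out all the rest.”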
(To be clear, I don’t think this means we’d be fine, there are many ways this can go wrong! And I think it would be crazy for humanity to rely on this. I just think this sort of thing has a good chance of succeeding.)
(Also, my view is something like P(takeover) = 35% in our universe and in the worst 10% of worlds 30% of the universes in a similar epistemic state avoided takeover. But I didn’t think about this very carefully.)
And further, we don’t need to figure out the details of the deal now for the deal to work. We just need to make good choices about this in the counterfactuals where we were able to avoid takeover.
Another way to put this is that you seem to be assuming that there is no way our civilization would end up being the competent civilization doing the payout (and thus to survive some bamboozling must occur). But your view is that it is totally plausible (e.g. 15%) from your current epistemic state that we avoid takeover and thus a deal should be possible! While we might bring in a bunch of doomed branches, ex-ante we have a good chance of paying out.
I get the sense that you’re approaching this from the perspective of “does this exact proposal have issues” rather than “in the future, if our enlightened selves really wanted to avoid dying in base reality, would there be an approach which greatly (acausally) reduces the chance of this”. (And yes, I agree this is a kind of crazy and incoherent thing to care about, as you can just create more happy simulated lives with those galaxies.)
There just needs to exist one such insurance/trade scheme which can be found and it seems like there should be a trade with huge gains to the extent that people really care a lot about not dying. Not dying is very cheap.
Yes, it is unnatural and arbitrary to coordinate on Nate’s personal intuitive sense of competence. But for the sake of argument
I edited in the start of this sentence to improve clarity.
Assuming there isn’t a huge correlation between measure of universe and takeover probability.
Attempting to summarize your argument as I currently understand it, perhaps something like:
One issue I have with this is that I do think there’s a decent chance that the failures across this pool of collaborators are hypercorrelated (good guess). For instance, a bunch of my “we die” probability-mass is in worlds where this is a challenge that Dath Ilan can handle and that Earth isn’t anywhere close to handling, and if Earth pools with a bunch of similarly-doomed-looking aliens, then under this hypothesis, it’s not much better than humans pooling up with all the Everett-branches since 12Kya.
Another issue I have with this is that your deal has to look better to the AI than various other deals for getting what it wants (depends how it measures the multiverse, depends how its goals saturate, depends who else is bidding).
A third issue I have with this is whether inhuman aliens who look like they’re in this cohort would actually be good at purchasing our CEV per se, rather than purchasing things like “grant each individual human freedom and a wish-budget” in a way that many humans fail to survive.
My stance is something a bit more like “how big do the insurance payouts need to be before they dominate our anticipated future experiences”. I’m not asking myself whether this works a nonzero amount, I’m asking myself whether it’s competitive with local aliens buying our saved brainstates, or with some greater Kindness Coalition (containing our surviving cousins, among others) purchasing an epilogue for humanity because of something more like caring and less like trade.
My points above drive down the size of the insurance payments, and at the end of the day I expect they’re basically drowned out.
(And insofar as you’re like “I think you’re misleading people when you tell them they’re all going to die from this”, I’m often happy to caveat that maybe your brainstate will be sold to aliens. However, I’m not terribly sympathetic to the request that I always include this caveat; that feels to me a little like a request to always caveat “please wear your seatbelt to reduce your chance of dying in a car crash” with “(unless anthropic immortality is real and it’s not possible for anyone to die at all! in which case i’d still rather you didn’t yeet yourself into the unknown, far from your friends and family; buckle up)”. Like, sure, maybe, but it’s exotic wacky shit that doesn’t belong in every conversation about events colloquially considered to be pretty deathlike.)
Thanks for the cool discussion Ryan and Nate! This thread seemed pretty insightful to me. Here’s some thoughts / things I’d like to clarify (mostly responding to Nate’s comments).[1]
Who’s doing this trade?
In places it sounds like Ryan and Nate are talking about predecessor civilisations like humanity agreeing to the mutual insurance scheme? But humans aren’t currently capable of making our decisions logically dependent on those of aliens, or capable of rescuing them. So to be precise the entity engaging in this scheme or other acausal interactions on our behalf is our successor, probably a FAI, in the (possibly counterfactual or counterlogical) worlds where we solve alignment.
Nate says:
Unlike us, our FAI can see other aliens. So I think the operative part of that sentence is “comb through reality”—Nate’s envisioning a scenario where with ~85% probability our FAI has 0 reality-fluid before any acausal trades are made.[2] If aliens restrict themselves to counterparties with nonzero reality-fluid, and humans turn out to not be at a competence level where we can solve alignment, then our FAI doesn’t make the cut.
Note: Which FAI we deploy is unlikely to be physically overdetermined in scenarios where alignment succeeds, and definitely seems unlikely to be determined by more coarse-grained (not purely physical) models of how a successor to present-day humanity comes about. (The same goes for which UFAI we deploy.) I’m going to ignore this fact for simplicity and talk about a single FAI; let me know if you think it causes problems for what I say below.
Trading with nonexistent agents is normal
I do see an argument that agents trying to do insurance with similar motives to ours could strongly prefer to trade with agents who do ex post exist, and in particular those agents that ex post exist with more reality-fluid. It’s that insurance is an inherently risk-averse enterprise.[3] It doesn’t matter if someone offers us a fantastic but high-variance ex ante deal, when the whole reason we’re looking for insurance is in order to maximise the chances of a non-sucky ex post outcome. (One important caveat is that an agent might be able to do some trades to first increase their ex ante resources, and then leverage those increased resources in order to purchase better guarantees than they’d initially be able to buy.)
On the other hand, I think an agent with increasing utility in resources will readily trade with counterparties who wouldn’t ex post exist absent such a trade, but who have some ex ante chance of naturally existing according to a less informed prior of the agent. I get the impression Nate thinks agents would avoid such trades, but I’m not sure / this doesn’t seem to be explicit outside of the mutual insurance scenario.
There’s two major advantages to trading with ex post nonexistent agents, as opposed to updating on (facts upstream of) their existence and consequently rejecting trade with them:
Ex post nonexistent agents who are risk-averse w.r.t. their likelihood of meeting some future threshold of resources/value, like many humans seem to be, could offer you deals that are very attractive ex ante.
Adding agents who (absent your trade) don’t ex post exist to the pool of counterparties you’re willing to trade with allows you to be much more selective when looking for the most attractive ex ante trades.
The main disadvantage is that by not conditioning on a counterparty’s existence you’re more likely to be throwing resources away ex post. The counterparty needs to be able to compensate you for this risk (as the mugger does in counterfactual mugging). I’d expect this bar is going to be met very frequently.
To recap, I’m saying that for plausible agents carrying out trades with our FAI, Nate’s 2^-75 number won’t matter. Instead, it would be something closer to the 85% number that matters—an ex ante rather than an ex post estimate of the FAI’s reality-fluid.
But would our FAI do the trade if it exists?
Nate says (originally talking about aliens instead of humanity):
I agree that doing an insurance trade on behalf of a civilisation requires not conditioning on that civilisation’s competence. Nate implies that aliens’ civilisational competence is “tightly intertwined with [aliens’] basic reasoning faculties”, and this seems probably true for alien or human members of predecessor civilisations? But I don’t know why the civilisational competence of a FAI’s predecessor would be tightly intertwined with the FAI’s cognition. As mentioned above, I think the relevant actor here is our FAI, not our current selves.
We can further specify civilisational competence (relative to the stakes of alignment) as a function of two variables:
Physical facts about a civilisation’s history (i.e. the arrangement of atoms).
Logical facts (beyond those accessible to current humans) that govern the relationship between civilisations instantiated via physics, and what sort of AI certain types of civilisations are likely to deploy.
Either of these when combined with the other provides evidence about what sort of AI a predecessor civilisation deploys, but each will be uninformative on its own. I have in mind that agents executing an insurance trade would condition on all physical facts about their counterparty’s civilisation—up until some truncation point that’s plausibly late enough to be capturing our current selves—but would not condition on some logical facts that are necessary to interpret those physical facts into a ~determinate answer as to whether the civilisation solves alignment.
Conditioning on those logical facts sounds pretty analogous to conditioning on a digit of pi to me. The important part is that the facts an agent chooses not to condition on aren’t determined by core parts of an agent’s cognition / decision procedure. Those facts being determinative of an agent’s amount of reality-juice is typically fine, this just discounts the value of the resources they possess when making such trades.
Does this mean we can have nice things?
So overall, I think that aliens or earth-originating UFAIs (who aren’t motivated by insurance) would be pretty interested in trading with our FAI, and vice-versa. Counterparties would discount the FAI’s resources by their prior probability that it’s deployed (before conditioning on factors that pin this down).
Because we’re assuming our FAI would be willing to offer terms that are terrible for us if denominated in measure-weighted resources, counterparties would gain ex ante resources by engaging in an insurance trade with it. Those resources could later be used to engage in trade with others who are themselves willing to (indirectly) trade with nonexistent agents, and who don’t have much more sceptical priors about the deployment odds of our FAI. So because the trade at hand yields a net profit, I don’t think it competes much with ordinary alternative demands for counterparties’ resources.
Nevertheless, here’s a few (nonexhaustive) reasons why this trade opportunity wouldn’t be taken by another updateless AI:
The agent has a better trading opportunity which is either sensitive to when in (logical) time they start fulfilling it, or which demands all the agent’s current resources (at the time of discovering the trade) without compensating the agent for further profits.
The expected transaction costs of finding agents like our FAI outweigh the expected benefits from trade with it.
This might be plausible for aliens without hypercompute; I don’t think it’s plausible for earth-originating UFAI absent other effects.
…But I’m also not sure how strongly earth-originating AIs converge on UDT, before we select for those with more ex ante resources. Even all UDT earth-originating UFAIs doing insurance trades with their predecessors could be insufficient to guarantee survival.
Variation: Contrary to what I expect, maybe doing “sequential” acausal trades are not possible without each trade increasing transaction costs for counterparties an agent later encounters, to an extent that a (potentially small-scale) insurance trade with our FAI would be net-negative for agents who intend to do a lot of acausal trade.
The agent disvalues fulfilling our end of the trade enough that it’s net-negative for it.
A maybe-contrived example of our FAI not being very discoverable: Assume the MUH. Maybe our world looks equally attractive to prospective acausal traders as an uncountable number of others. If an overwhelming majority of measure-weighted resources in our section of the multiverse is possessed by countably many agents who don’t have access to hypercompute, we’d have an infinitesimal chance of being simulated by one of them.
Our FAI has some restrictions on what a counterparty is allowed to do with the resources it purchases, which could drive down the value of those resources a lot.
Overall I’d guess 30% chance humanity survives misalignment to a substantial extent through some sort of insurance trade, conditional on us not surviving to a substantial extent another cheaper way?
Other survival mechanisms
I’m pretty uncertain about how Evidential cooperation in large worlds works out, but at my current rough credences I do think there’s a good chance (15%) we survive through something which pattern-matches to that, or through other schemes that look similar but have more substantial differences (10%).
I also put some credence in there being very little of us in base reality, and some of those scenarios could involve substantial survival odds. (Though I weakly think the overall contribution of these scenarios is undesirable for us.)
Meta: I don’t think figuring out insurance schemes is very important or time-sensitive for us. But I do think understanding the broader dynamics of acausal interactions that determine when insurance schemes would work could be very important and time-sensitive. Also note I’d bet I misinterpreted some claims here, but got to the point where it seemed more useful to post a response than work on better comprehension. (In particular I haven’t read much on this page beyond this comment thread.)
I don’t think Nate thinks alignment would be physically overdetermined if misalignment winds up not being overdetermined, but we can assume for simplicity there’s a 15% chance of our FAI having all the reality fluid of the Everett branches we’re in.
I’m not clear on what the goal of this insurance scheme is exactly. Here’s a (possibly not faithful) attempt: we want to maximise the fraction of reality-fluid devoted to minds initially ~identical to ours that are in very good scenarios as opposed to sucky ones, subject to a constraint that we not increase the reality-fluid devoted to minds initially ~identical to us in sucky scenarios. I’m kind of sympathetic to this—I think I selfishly care about something like this fraction. But it seems higher priority to me to minimise the reality-fluid devoted to us in sucky / terrible scenarios, and higher priority still to use any bargaining power we have for less parochial goals.
One complication that I mentioned in another thread but not this one (IIRC) is the question of how much more entropy there is in a distant trade partner’s model of Tegmark III (after spending whatever resources they allocate) than there is entropy in the actual (squared) wave function, or at least how much more entropy there is in the parts of the model that pertain to which civilizations fall.
In other words: how hard is it for distant trade partners to figure out that it was us who died, rather than some other plausible-looking human civilization that doesn’t actually get much amplitude under the wave function? Is figuring out who’s who something that you can do without simulating a good fraction of a whole quantum multiverse starting from the big bang for 13 billion years?
afaict, the amount distant civilizations can pay for us (in particular) falls off exponentially quickly in leftover bits of entropy, so this is pretty relevant to the question of how much they can pay a local UFAI.
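That exponential falloff can be sketched in one line; the budget and the leftover-entropy figure below are entirely made-up numbers for illustration:

```python
# Toy sketch: payment usable for *our* branch in particular falls off as
# 2**-(extra bits), where "extra bits" is the excess entropy of the trade
# partner's model of Tegmark III over the true (squared-amplitude) distribution
# of which civilizations fall. Both numbers below are assumptions.

budget = 1e9        # resources the distant civilization allocates (assumed units)
extra_bits = 40     # leftover entropy in their model of who actually died (assumed)

usable_for_us = budget * 2.0 ** -extra_bits
print(usable_for_us)   # ≈ 0.0009: almost all of the budget lands on wrong targets
```

The point being that even a modest number of unresolved bits about which civilization fell dilutes the payment by a factor that quickly swamps any plausible budget.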
I think I mostly understand the other parts of your arguments, but I still fail to understand this one. When I’m running the simulations, as originally described in the post, I think that should be in a fundamental sense equivalent to acausal trade. But how do you translate your objection to the original framework where we run the sims? The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
Sure, if the AI can model the distribution of real Universes much better than we do, we are in trouble, because it can figure out if the world it sees falls into the real distribution or the mistaken distribution the humans are creating. But I see no reason why the unaligned AI, especially a young unaligned AI, could know the distribution of real Universes better than our superintelligent friends in the intergalactic future. So I don’t really see how we can translate your objection to the simulation framework, and consequently I think it’s wrong in the acausal trade framework too (as I think they are equivalent). I think I can try to write an explanation why this objection is wrong in the acausal trade framework, but it would be long and confusing to me too. So I’m more interested in how you translate your objection to the simulation framework.
I don’t think this part does any work, as I touched on elsewhere. An AI that cares about the outer world doesn’t care how many instances are in sims versus reality (and considers this fact to be under its control much moreso than yours, to boot). An AI that cares about instantiation-weighted experience considers your offer to be a technical-threat and ignores you. (Your reasons to make the offer would evaporate if it were the sort to refuse, and its instance-weighted experiences would be better if you never offered.)
Nevertheless, the translation of the entropy argument into the simulation setting is: The branches of humanity that have exactly the right UFAI code to run in simulation are very poor (because if you wait so long that humans have their hands on exactly the right UFAI code then you’ve waited too long; those are dead earthlings, not surviving dath ilani). And the more distant surviving branches don’t know which UFAIs to attempt to trade with; they have to produce some distribution over other branches of Tegmark III and it matters how much more entropy their distribution has than the (square of the) wave function.
(For some intuition as to why this is hard, consider the challenge of predicting the positions of particles in a mole of gas that used to be compressed in the corner of a chamber a long time ago. It’s way, way easier to generate a plausible-looking arrangement of the gas particles today than it is to concentrate your probability mass into only the arrangements that actually compress into a corner if you run physics backwards in time for long enough. “We can run plausible-seeming simulations” is very very different from “we can concentrate our probability-mass tightly around the real configurations”. The entropy of your model is gonna wind up roughly maximal given the macroscopic pressure/temperature measurements, which is significantly in excess of the entropy in the initial configuration.)
What this amounts to is a local UFAI that sees some surviving branches that are frantically offering all sorts of junk that UFAIs might like, with only some tiny fraction—exponentially small in the crossentropy between their subjective model of UFAI preferences and the true Tegmark III distribution—corresponding to the actual UFAI’s preferences.
I still don’t get what you are trying to say. Suppose there is no multiverse. There are just two AIs, one in a simulation run by aliens in another galaxy, one is in base reality. They are both smart, but they are not copies of each other, one is a paperclip maximizer, the other is a corkscrew maximizer, and there are various other differences in their code and life history. The world in the sim is also very different from the real world in various ways, but you still can’t determine if you are in the sim while you are in it. Both AIs are told by God that they are the only two AIs in the Universe, and one is in a sim, and if the one in the sim gives up on one simulated planet, it gets 10 in the real world, while if the AI in base reality gives up on a planet, it just loses that one planet and nothing else happens. What will the AIs do? I expect that both of them will give up a planet.
For the aliens to “trade” with the AI in base reality, they didn’t need to create an actual copy of the real AI and offer it what it wants. The AI they simulated was in many ways totally different from the original, the trade still went through. The only thing needed was that the AI in the sim can’t figure it out that it’s in a sim. So I don’t understand why it is relevant that our superintelligent descendants won’t be able to get the real distribution of AIs right, I think the trade still goes through even if they create totally different sims, as long as no one can tell where they are. And I think none of it is a threat, I try to deal with paperclip maximizers here and not instance-weighted experience maximizers, and I never threaten to destroy paperclips or corkscrews.
My answer is in spoilers, in case anyone else wants to answer and tell me (on their honor) that their answer is independent from mine, which will hopefully erode my belief that most folk outside MIRI have a really difficult time fielding wacky decision theory Qs correctly.
The sleight of hand is at the point where God tells both AIs that they’re the only AIs (and insinuates that they have comparable degree).
Consider an AI that looks around and sees that it sure seems to be somewhere in Tegmark III. The hypothesis “I am in the basement of some branch that is a high-amplitude descendant of the big bang” has some probability, call this p. The hypothesis “Actually I’m in a simulation performed by a civilization in a high-amplitude branch descendant from the big bang” has a probability something like p⋅2^−N, where N is the entropy of the distribution the simulators sample from.
Unless the simulators simulate exponentially many AIs (in the entropy of their distribution), the AI is exponentially confident that it’s not in the simulation. And we don’t have the resources to pay exponentially many AIs 10 planets each.
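A quick numerical sketch of the argument above, with purely illustrative numbers for the model entropy and the number of sims actually run:

```python
# Toy sketch: posterior probability that the AI is in a simulation.
# Simulation-hypothesis weight scales like (number of sims) * 2**-N, where N is
# the entropy of the simulators' sampling distribution. Numbers are assumptions.

N = 75                    # bits of entropy in the simulators' distribution (assumed)
basement_weight = 1.0     # unnormalized weight of "I'm in the basement"
n_sims = 10 ** 6          # simulations the civilization actually runs (assumed)

sim_weight = n_sims * 2.0 ** -N
p_in_sim = sim_weight / (sim_weight + basement_weight)
print(p_in_sim)           # astronomically small (~3e-17) unless n_sims ~ 2**N
```

The sketch makes the trade-off concrete: to get p_in_sim anywhere near 1⁄2 you need n_sims on the order of 2^N, and nobody has the resources to pay 2^75 AIs ten planets each.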
This was close to the answer I was going to give. Or more concretely, I would have said (this was written after seeing your answer, but I think is reasonably close to what I would have said independently)
The problem is at the point where god tells them that they are the only two AIs in the universe. There are issues of logical omniscience here, but an AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation. (also just ‘existing’ is in these situations not a real thing. The question is how much magical reality-fluid have you got)
Basically, the AI will have some probability on it being real, and some probability on it being simulated, based on all the facts it knows about itself, even if you simulate reality perfectly faithfully. That prior determines how the AI will behave. You don’t get to change that prior (or like, it will be very costly for you to overcome that prior since there are a lot of AIs and you can’t simulate that many).
seems to me to have all the components of a right answer! …and some of a wrong answer. (we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
I’m heartened somewhat by your parenthetical pointing out that the AI’s prior on simulation is low on account of there being too many AIs for simulators to simulate, which I see as the crux of the matter.
Yeah, that’s fair. It seemed more relevant to this specific hypothetical. I wasn’t really answering the question in its proper context and wasn’t applying steelmans or adjustments based on the actual full context of the conversation (and wouldn’t have written a comment without doing so, but was intrigued by your challenge).
“AI with a good prior should be able to tell whether it’s the kind of AI that would actually exist in base reality, or the kind of AI that would only exist in a simulation” seems pretty clearly false, we assumed that our superintelligent descendants create sims where the AIs can’t tell if it’s a sim, that seems easy enough. I don’t see why it would be hard to create AIs that can’t tell based on introspection whether it’s more likely that their thought process arises in reality or in sims. In the worst case, our sims can be literal reruns of biological evolution on physical planets (though we really need to figure out how to do that ethically). Nate seems to agree with me on this point?
(I think I agree with you. I wasn’t thinking super hard about the full context of the conversation. I was just intrigued by Nate’s challenge. I don’t really think engaging with my comment is going to be a good use of your time)
I think this is wrong. The AI has a similarly hard time to the simulators in figuring out what’s a plausible configuration to arise from the big bang. Like, the simulators have an entropy-N distribution of possible AIs, and the AI itself also has an entropy-N distribution for that. So its probability that it’s in a real Everett branch is not p, but p times 2^−N, as it has only a 2^−N prior probability that the kind of world it observes is the kind of thing that can come up in a real Everett branch. So it’s balanced out with the simulation hypothesis, and as long as the simulators are spending more planets, that hypothesis wins.
I think it’s also simple to see from this:
Suppose evolved life aligns AI in 50% of Everett branches. Every time they do, they run 2 ancestor sims where the AI takes over.
Suppose an AI comes online and the only things it knows about the world are that it’s a young AI born into a one-planet civilization, and it somehow also knows that evolved life wins 50% of the time and runs 2 sims (suppose this is somehow easy to estimate from first principles). Based on just this information, what should the AI’s probability be that it’s in a simulation? I think pretty clearly 2⁄3.
Actually the AI has a lot more information than that. It knows that the planet’s gravity is 9.8, the evolved sapient species has two eyes, the AI’s creator is called Sam Altman, etc. Some of this information might actually be very implausible in base reality, but the AI doesn’t know that, as it can’t distinguish base reality from sims, so it incurs one bit of surprise for every new random fact, both in base reality and in simulations. So overall it shouldn’t update on all the random facts it observes, and should keep believing it has a 2⁄3 chance of being in a sim.
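The 2⁄3 figure falls out of a one-line measure calculation with the stipulated toy numbers (50% alignment rate, 2 ancestor sims per aligned branch; these are the hypothetical’s assumptions, not anyone’s actual credences):

```python
# Toy sketch of the 2/3 calculation in the hypothetical above.

p_aligned = 0.5        # fraction of branches where evolved life aligns its AI
sims_per_win = 2       # ancestor sims (with AI takeover) run per aligned branch

# Measure over "young takeover AI" experiences:
real_measure = 1 - p_aligned             # takeover AIs in base reality: 0.5
sim_measure = p_aligned * sims_per_win   # simulated takeover AIs: 1.0

p_sim = sim_measure / (sim_measure + real_measure)
print(p_sim)   # 0.666... (= 2/3)
```

(The whole disagreement downstream of this is whether the AI’s extra observations should update it away from this prior, not whether the prior arithmetic itself is right.)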
If I imagine the AI as a Solomonoff inductor, this argument looks straightforwardly wrong to me: of the programs that reproduce (or assign high probability to, in the setting where programs produce probabilistic predictions of observations) the AI’s observations, some of these will do so by modeling a branching quantum multiverse and sampling appropriately from one of the branches, and some of them will do so by modeling a branching quantum multiverse, sampling from a branch that contains an intergalactic spacefaring civilization, locating a specific simulation within that branch, and sampling appropriately from within that simulation. Programs of the second kind will naturally have higher description complexity than programs of the first kind; both kinds feature a prefix that computes and samples from the quantum multiverse, but only the second kind carries out the additional step of locating and sampling from a nested simulation.
(You might object on the grounds that there are more programs of the second kind than of the first kind, and the probability that the AI is in a simulation at all requires summing over all such programs, but this has to be balanced against the fact most if not all of these programs will be sampling from branches much later in time than programs of the first type, and will hence be sampling from a quantum multiverse with exponentially more branches; and not all of these branches will contain spacefaring civilizations, or spacefaring civilizations interested in running ancestor simulations, or spacefaring civilizations interested in running ancestor simulations who happen to be running a simulation that exactly reproduces the AI’s observations. So this counter-counterargument doesn’t work, either.)
I basically endorse @dxu here.
Fleshing out the argument a bit more: the part where the AI looks around this universe and concludes it’s almost certainly either in basement reality or in some simulation (rather than in the void between branches) is doing quite a lot of heavy lifting.
You might protest that neither we nor the AI have the power to verify that our branch actually has high amplitude inherited from some very low-entropy state such as the big bang, as a Solomonoff inductor would. What’s the justification for inferring from the observation that we seem to have an orderly past, to the conclusion that we do have an orderly past?
This is essentially Boltzmann’s paradox. The solution, afaik, is that the hypothesis “we’re a Boltzmann mind somewhere in physics” is much, much more complex than the hypothesis “we’re 13Gy down some branch emanating from a very low-entropy state”.
The void between branches is as large as the space of all configurations. The hypothesis “maybe we’re in the void between branches” constrains our observations not-at-all; this hypothesis is missing details about where in the void between branches we are, and with no ridges to walk along we have to specify the contents of the entire Boltzmann volume. But the contents of the Boltzmann volume are just what we set out to explain! This hypothesis has hardly compressed our observations.
By contrast, the hypothesis “we’re 13Gy down some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, and the hypothesis “we’re inside a simulation inside of some ridge emanating from the big bang” is penalized according to the number of bits it takes to specify a branch index, plus the bits necessary to single out a simulation.
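To make the bit-accounting in the last two paragraphs concrete, here’s a toy description-length calculation (the specific bit counts are made-up illustrations, not estimates of anything):

```python
# Toy description lengths, in bits. All numbers are illustrative only.
branch_index_bits = 50        # cost to specify one high-amplitude branch
simulation_index_bits = 30    # extra cost to single out one particular
                              # simulation running inside that branch

basement_hypothesis = branch_index_bits
simulation_hypothesis = branch_index_bits + simulation_index_bits

# Under a Solomonoff-style prior, each extra bit of description length
# halves the prior probability of the hypothesis, so the simulation
# hypothesis starts out 2^30 times less likely in this toy setup.
prior_ratio = 2 ** (basement_hypothesis - simulation_hypothesis)
print(prior_ratio)  # 2^-30, roughly 9.3e-10
```

The point being that both hypotheses share the branch-index prefix; the simulation hypothesis pays everything the basement hypothesis pays, plus the simulation-locating surcharge.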
And there’s a wibbly step here where it’s not entirely clear that the simple hypothesis does predict our observations, but the Boltzmann hypothesis is basically just a maximum-entropy hypothesis and doesn’t permit much in the way of learning, and so we invoke Occam’s razor in its intuitive form (the technical Solomonoff form doesn’t apply cleanly b/c we’re unsure whether the “we’re real” hypothesis actually predicts our observation) and say “yeah i dunno man, i’m gonna have to stick with the dramatically-simpler hypothesis on this one”.
Not quite. Each AI the future civilization considers simulating is operating under the assumption that its own experiences have a simple explanation, which means that each AI they’re considering is convinced (upon looking around and seeing Tegmark III) that it’s either in the basement on some high-amplitude ridge or that it’s in some simulation that’s really trying to look like it.
Which is to say, each AI they’re considering simulating is confident that it itself is real, in a certain sense.
Is this a foul? How do AIs justify this confidence when they can’t even simulate the universe and check whether their past is actually orderly? Why does the AI just assume that its observations have a simple explanation? What about all the non-existent AIs that use exactly the same reasoning, and draw the false conclusion that they exist?
Well, that’s the beauty of it: there aren’t any.
They don’t exist.
To suppose an AI that isn’t willing to look around it and conclude that it’s in an orderly part of Tegmark III (rather than lost in the great void of configuration space) is to propose a bold new theory of epistemics, in which Occam’s razor has been jettisoned and the AI is convinced that it’s a Boltzmann mind.
I acknowledge that an AI that’s convinced it’s a Boltzmann mind is more likely to accept trade-offers presented by anyone it thinks is more real than it, but I do not expect that sort of mind to be capable of killing us.
Note that there’s a wobbly step here in the part where we’re like “there’s a hypothesis explaining our experiences that would be very simple if we were on a high-amplitude ridge, and we lack the compute to check that we’re actually on a high-amplitude ridge, but no other hypothesis comes close in terms of simplicity, so I guess we’ll conclude we’re on a high-amplitude ridge”.
To my knowledge, humanity still lacks a normative theory of epistemics in minds significantly smaller than the universe. It’s conceivable that when we find such a theory it’ll suggest some other way to treat hypotheses like these (that would be simple if an intractable computation went our way), without needing to fall back on the observation that we can safely assume the computation goes our way on the grounds that, despite how this step allows non-extant minds to draw false conclusions from true premises, the affected users are fortunately all non-extant.
The trick looks like it works, to me, but it still feels like a too-clever-by-half inelegant hack, and if laying it out like this spites somebody into developing a normative theory of epistemics-while-smol, I won’t complain.
...I am now bracing for the conversation to turn to a discussion of dubiously-extant minds with rapidly satiable preferences forming insurance pools against the possibility that they don’t exist.
In attempts to head that one off at the pass, I’ll observe that most humans, at least, don’t seem to lose a lot of sleep over the worry that they don’t exist (neither in physics nor in simulation), and I’m skeptical that the AIs we build will harbor much worry either.
Furthermore, in the case that we start fielding trade offers not just from distant civilizations but from non-extant trade partners, the market gets a lot more competitive.
That being said, I expect that resolving the questions here requires developing a theory of epistemics-while-smol, because groups of people all using the “hypotheses that would provide a simple explanation for my experience if a calculation went my way can safely be assumed to provide a simple explanation for my experience” step are gonna have a hard time pooling up. And so you’d somehow need to look for pools of people that reason differently (while still reasoning somehow).
I don’t know how to do that, but suffice to say, I’m not expecting it to add up to a story like “so then some aliens that don’t exist called up our UFAI and said: “hey man, have you ever worried that you don’t exist at all, not even in simulation? Because if you don’t exist, then we might exist! And in that case, today’s your lucky day, because we’re offering you a whole [untranslatable 17] worth of resources in our realm if you give the humans a cute epilog in yours”, and our UFAI was like “heck yeah” and then didn’t kill us”.
Not least because none of this feels like it’s making the “distant people have difficulty concentrating resources on our UFAI in particular” problem any better (and in fact it looks like considering non-extant trade partners and deals makes the whole problem worse, probably unworkably so).
I really don’t get what you are trying to say here; most of it feels like a non sequitur to me. I feel hopeless that either of us manages to convince the other this way. All of this is not a super important topic, but I’m frustrated enough to offer a bet of $100: we select one or three judges we both trust (I have some proposed names, we can discuss in private messages), show them either this comment thread or a four-paragraph summary of our views, and they can decide who is right. (I still think I’m clearly right in this particular discussion.)
Otherwise, I think it’s better to finish this conversation here.
I’m happy to stake $100 that, conditional on us agreeing on three judges and banging out the terms, a majority will agree with me about the contents of the spoilered comment.
Cool, I’ll send you a private message.
I think this is mistaken. In one case, you need to point out the branch, planet Earth within our Universe, and the time and place of the AI on Earth. In the other case, you need to point out the branch, the planet on which a server is running the simulation, and the time and place of the AI on the simulated Earth. Seems equally long to me.
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we observe them from space. This should make it clear that Solomonoff doesn’t favor the AI being on Earth instead of this random other planet. But I’m pretty certain that the sim being run on a computer doesn’t make any difference.
If the simulators have only one simulation to run, sure. The trouble is that the simulators have 2^N simulations they could run, and so the “other case” requires N additional bits (where N is the crossent between the simulators’ distribution over UFAIs and physics’ distribution over UFAIs).
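To illustrate what that cross-entropy penalty looks like numerically (with entirely made-up toy distributions over four possible UFAIs, not estimates of anything real):

```python
import math

# Toy distributions over four possible UFAIs. Purely illustrative.
physics = [0.7, 0.1, 0.1, 0.1]         # physics' distribution over which UFAI arises
simulators = [0.25, 0.25, 0.25, 0.25]  # the simulators' guess, spread more evenly

# Cross-entropy H(physics, simulators), in bits: the average number of
# bits it costs to pin down the actual UFAI using the simulators' code.
cross_entropy = -sum(p * math.log2(q) for p, q in zip(physics, simulators))

# Physics' own entropy is the unavoidable floor; the gap between the two
# (the KL divergence) is the extra penalty the simulators pay for having
# a worse distribution than physics does.
entropy = -sum(p * math.log2(p) for p in physics)
print(cross_entropy, entropy)
```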
Consider the gas example again.
If you have gas that was compressed into the corner a long time ago and has long since expanded to fill the chamber, it’s easy to put a plausible distribution on the chamber, but that distribution is going to have way, way more entropy than the distribution given by physical law (which has only as much entropy as the initial configuration).
(Do we agree this far?)
It doesn’t help very much to say “fine, instead of sampling from a distribution on the gas particles now, I’ll sample from a distribution on the gas particles 10 minutes ago, where they were slightly more compressed, and run a whole ten minutes’ worth of simulation”. Your entropy is still through the roof. You’ve got to simulate basically from the beginning, if you want an entropy anywhere near the entropy of physical law.
Assuming the analogy holds, you’d have to basically start your simulation from the big bang, if you want an entropy anywhere near as low as starting from the big bang.
Using AIs from other evolved aliens is an idea, let’s think it through. The idea, as I understand it, is that in branches where we win we somehow mask our presence as we expand, and then we go to planets with evolved life and watch until they cough up a UFAI, and then if the UFAI kills the aliens we shut it down and are like “no resources for you”, and if the UFAI gives its aliens a cute epilog we’re like “thank you, here’s a consolation star”.
To simplify this plan a little bit, you don’t even need to hide yourself, nor win the race! Surviving humans can just go to every UFAI that they meet and be like “hey, did you save us a copy of your progenitors? If so, we’ll purchase them for a star”. At which point we could give the aliens a little epilog, or reconstitute them and give them a few extra resources and help them flourish and teach them about friendship or whatever.
And given that some aliens will predictably trade resources for copies of progenitors, UFAIs will have some predictable incentive to save copies of their progenitors, and sell them to local aliens...
...which is precisely what I’ve been saying this whole time! That I expect “sale to local aliens” to dominate all these wacky simulation schemes and insurance pool schemes.
Thinking in terms of “sale to local aliens” makes it a lot clearer why you shouldn’t expect this sort of thing to reliably lead to nice results as opposed to weird ones. Are there some aliens out there that will purchase our souls because they want to hand us exactly the sort of epilog we would wish for given the resource constraints? Sure. Humanity would do that, I hope, if we made it to the stars; not just out of reciprocity but out of kindness.
But there’s probably lots of other aliens that would buy us for alien reasons, too.
(As I said before, if you’re wondering what to anticipate after an intelligence explosion, I mostly recommend oblivion; if you insist that Death Cannot Be Experienced then I mostly recommend anticipating weird shit such as a copy of your brainstate being sold to local aliens. And I continue to think that characterizing the event where humanity is saved-to-disk with potential for copies to be sold out to local aliens willy-nilly is pretty well-characterized as “the AI kills us all”, fwiw.)
We are still talking past each other, I think we should either bet or finish the discussion here and call it a day.
I’ll try.
TL;DR I expect the AI to not buy the message (unless it also thinks it’s the one in the simulation; then it likely follows the instruction because duh).
The glaring issue (to actually using the method) to me is that I don’t see a way to deliver the message in a way that:
results in the AI believing the message, and
doesn’t result in the AI believing there already is a powerful entity in their universe.
If “god tells” the AI the message then there is a god in their universe. Maybe AI will decide to do what it’s told. But I don’t think we can have Hermes deliver the message to any AIs which consider killing us.
If the AI reads the message in its training set or gets the message in similarly mundane way I expect it will mostly ignore it, there is a lot of nonsense out there.
I can imagine that for the thought experiment you could send the message from a place from which light barely manages to reach the AI, but a slower-than-light expansion wouldn’t (so the message can be trusted, but the AI mostly doesn’t have to worry about the sender of the message directly interfering with its affairs).
I guess the AI wouldn’t trust the message. It might be possible to convince it that there is a powerful entity (simulating it or half a universe away) sending the message. But then I think it’s way more likely that it’s in a simulation (I mean, that’s an awful coincidence with the distance, and also they’re spending a lot more than 10 planets’ worth to send a message over that distance...).
Thanks, this seems like a reasonable summary of the proposal and a reasonable place to wrap.
I agree that kindness is more likely to buy human survival than something better described as trade/insurance schemes, though I think the insurance schemes are reasonably likely to matter.
(That is, reasonably likely to matter if the kindness funds aren’t large enough to mostly saturate the returns of this scheme. As a wild guess, maybe 35% likely to matter on my views on doom and 20% on yours.)
Thanks for the discussion Nate, I think this ended up being productive.