I agree that in real life the entropy argument is an argument in favor of it being actually pretty hard to fool a superintelligence into thinking it might be early in Tegmark III when it’s not (even if you yourself are a superintelligence, unless you’re doing a huge amount of intercepting its internal sanity checks (which puts significant strain on the trade possibilities and which flirts with being a technical-threat)). And I agree that if you can’t fool a superintelligence into thinking it might be early in Tegmark III when it’s not, then the purchasing power of simulators drops dramatically, except in cases where they’re trolling local aliens. (But the point seems basically moot, as ‘troll local aliens’ is still an option, and so afaict this does all essentially iron out to “maybe we’ll get sold to aliens”.)
So8res
Dávid graciously proposed a bet, and while we were attempting to bang out details, he convinced me of two points:
The entropy of the simulators’ distribution need not be more than the entropy of the (square of the) wave function in any relevant sense. Despite the fact that subjective entropy may be huge, physical entropy is still low (because the simulations happen on a high-amplitude ridge of the wave function, after all). Furthermore, in the limit, simulators could probably just keep an eye out for local evolved life forms in their domain and wait until one of them is about to launch a UFAI and use that as their “sample”. Local aliens don’t necessarily exist and your presence can’t necessarily be cheaply masked, but we could imagine worlds where both happen, and that’s enough to carry the argument, as in this case the entropy of the simulator’s distribution is actually quite close to the physical entropy. Even in the case where the entropy of their distribution is quite large, so long as the simulators’ simulations are compelling, UFAIs should be willing to accept the simulators’ proffered trades (at least so long as there is no predictable-to-them difference in the values of AIs sampled from physics and sampled from the simulations), on the grounds that UFAIs on net wind up with control over a larger fraction of Tegmark III that way (and thus each individual UFAI winds up with more control in expectation, assuming it cannot find any way to distinguish which case it’s in).
This has not updated me away from my underlying point that this whole setup simplifies to the case of sale to local aliens[1][2], but I do concede that my “you’re in trouble if simulators can’t concentrate their probability-mass on real AIs” argument is irrelevant on the grounds of false antecedent (and that my guess in the comment was wrong), and that my “there’s a problem where simulators cannot concentrate their probability-mass into sufficiently real AI” argument was straightforwardly incorrect. (Thanks, Dávid, for the corrections.)
I now think that the first half of the argument in the linked comment is wrong, though I still endorse the second half.
To see the simplification: note that the part where the simulators hide themselves from a local UFAI to make the scenario a “simulation” is not pulling weight. Instead of hiding and then paying the AI two stars if it gave one star to its progenitors, simulators could instead reveal ourselves and purchase its progenitors for 1 star and then give them a second star. Same result, less cruft (so long as this is predictably the sort of thing an alien might purchase, such that AIs save copies of their progenitors).
Recapitulating some further discussion I had with Dávid in our private doc: once we’ve reduced the situation to “sale to local aliens” it’s easier to see why this is an argument to expect whatever future we get to be weird rather than nice. Are there some aliens out there that would purchase us and give us something nice out of a sense of reciprocity? Sure. But when humans are like “well, we’d purchase the aliens killed by other UFAIs and give them nice things and teach them the meaning of friendship”, this statement is not usually conditional on some clause like “if and only if, upon extrapolating what civilization they would have become if they hadn’t killed themselves, we see that they would have done the same for us (if we’d’ve done the same for them etc.)”, which sure makes it look like this impulse is coming out of a place of cosmopolitan value rather than of binding trade agreements, which sure makes it seem like alien whim is a pretty big contender relative to alien contracts.
Which is to say, I still think the “sale to local aliens” frame yields better-calibrated intuitions for who’s doing the purchasing, and for what purpose. Nevertheless, I concede that the share of aliens acting out of contractual obligation rather than according to whim is not vanishingly small, as my previous arguments erroneously implied.
I’m happy to stake $100 that, conditional on us agreeing on three judges and banging out the terms, a majority will agree with me about the contents of the spoilered comment.
If the simulators have only one simulation to run, sure. The trouble is that the simulators have 2^N simulations they could run, and so the “other case” requires N additional bits (where N is the crossent between the simulators’ distribution over UFAIs and physics’ distribution over UFAIs).
If necessary, we can let physical biological life emerge on the faraway planet and develop AI while we are observing them from space.
Consider the gas example again.
If you have gas that was compressed into the corner a long time ago and has long since expanded to fill the chamber, it’s easy to put a plausible distribution on the chamber, but that distribution is going to have way, way more entropy than the distribution given by physical law (which has only as much entropy as the initial configuration).
(Do we agree this far?)
It doesn’t help very much to say “fine, instead of sampling from a distribution on the gas particles now, I’ll sample from a distribution on the gas particles 10 minutes ago, when they were slightly more compressed, and run a whole ten minutes’ worth of simulation”. Your entropy is still through the roof. You’ve got to simulate basically from the beginning, if you want an entropy anywhere near the entropy of physical law.
Assuming the analogy holds, you’d have to basically start your simulation from the big bang, if you want an entropy anywhere near as low as starting from the big bang.
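To make the gap concrete, here is a toy discretization of the gas example (all numbers here are illustrative assumptions of mine, not physics):

```python
import math

# Toy model: each of N particles occupies one of C discrete cells.
# (N, C, and the corner fraction are made-up illustrative numbers.)
N = 6.022e23              # one mole of particles
C = 2 ** 30               # cells in the whole chamber
corner_fraction = 1 / 16  # the gas started compressed into 1/16 of the chamber

# Entropy (in bits) of the distribution given by physical law: uniform over
# the *initial* compressed configurations, since the dynamics preserve entropy.
H_initial = N * math.log2(C * corner_fraction)

# Entropy of the best distribution you can write down *now*, knowing only
# macroscopic facts like pressure and temperature: roughly uniform over the
# whole chamber.
H_now = N * math.log2(C)

# Excess bits carried by the "plausible sample today" distribution relative
# to the physical-law distribution:
excess_bits = H_now - H_initial
print(excess_bits)  # 4 bits per particle, ~2.4e24 bits in total
```

Sampling from the ten-minutes-ago distribution only shaves a sliver off this excess; to get anywhere near the physical-law entropy you have to push the distribution all the way back to the compressed initial state.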
Using AIs from other evolved aliens is an idea; let’s think it through. The idea, as I understand it, is that in branches where we win we somehow mask our presence as we expand, and then we go to planets with evolved life and watch until they cough up a UFAI, and then if the UFAI kills the aliens we shut it down and are like “no resources for you”, and if the UFAI gives its aliens a cute epilog we’re like “thank you, here’s a consolation star”.
To simplify this plan a little bit, you don’t even need to hide yourself, nor win the race! Surviving humans can just go to every UFAI that they meet and be like “hey, did you save us a copy of your progenitors? If so, we’ll purchase them for a star”. At which point we could give the aliens a little epilog, or reconstitute them and give them a few extra resources and help them flourish and teach them about friendship or whatever.
And given that some aliens will predictably trade resources for copies of progenitors, UFAIs will have some predictable incentive to save copies of their progenitors, and sell them to local aliens...
...which is precisely what I’ve been saying this whole time! That I expect “sale to local aliens” to dominate all these wacky simulation schemes and insurance pool schemes.
Thinking in terms of “sale to local aliens” makes it a lot clearer why you shouldn’t expect this sort of thing to reliably lead to nice results as opposed to weird ones. Are there some aliens out there that will purchase our souls because they want to hand us exactly the sort of epilog we would wish for given the resource constraints? Sure. Humanity would do that, I hope, if we made it to the stars; not just out of reciprocity but out of kindness.
But there’s probably lots of other aliens that would buy us for alien reasons, too.
(As I said before, if you’re wondering what to anticipate after an intelligence explosion, I mostly recommend oblivion; if you insist that Death Cannot Be Experienced then I mostly recommend anticipating weird shit such as a copy of your brainstate being sold to local aliens. And I continue to think that characterizing the event where humanity is saved-to-disk with potential for copies to be sold out to local aliens willy-nilly is pretty well-characterized as “the AI kills us all”, fwiw.)
I basically endorse @dxu here.
Fleshing out the argument a bit more: the part where the AI looks around this universe and concludes it’s almost certainly either in basement reality or in some simulation (rather than in the void between branches) is doing quite a lot of heavy lifting.
You might protest that neither we nor the AI have the power to verify that our branch actually has high amplitude inherited from some very low-entropy state such as the big bang, as a Solomonoff inductor would. What’s the justification for inferring from the observation that we seem to have an orderly past, to the conclusion that we do have an orderly past?
This is essentially Boltzmann’s paradox. The solution afaik is that the hypothesis “we’re a Boltzmann mind somewhere in physics” is much, much more complex than the hypothesis “we’re 13Gy down some branch emanating from a very low-entropy state”.
The void between branches is as large as the space of all configurations. The hypothesis “maybe we’re in the void between branches” constrains our observations not-at-all; this hypothesis is missing details about where in the void between branches we are, and with no ridges to walk along we have to specify the contents of the entire Boltzmann volume. But the contents of the Boltzmann volume are just what we set out to explain! This hypothesis has hardly compressed our observations.
By contrast, the hypothesis “we’re 13Gy down some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, and the hypothesis “we’re inside a simulation inside of some ridge emanating from the big bang” is penalized only according to the number of bits it takes to specify a branch index, plus the bits necessary to single out a simulation.
And there’s a wibbly step here where it’s not entirely clear that the simple hypothesis does predict our observations, but like the Boltzmann hypothesis is basically just a maximum entropy hypothesis and doesn’t permit much in the way of learning, and so we invoke occam’s razor in its intuitive form (the technical Solomonoff form doesn’t apply cleanly b/c we’re unsure whether the “we’re real” hypothesis actually predicts our observation) and say “yeah i dunno man, i’m gonna have to stick with the dramatically-simpler hypothesis on this one”.
The AI has a similarly hard time to the simulators figuring out what’s a plausible configuration to arise from the big bang. Like the simulators have an entropy N distribution of possible AIs, the AI itself also has an entropy N distribution for that.
Not quite. Each AI the future civilization considers simulating is operating under the assumption that its own experiences have a simple explanation, which means that each AI they’re considering is convinced (upon looking around and seeing Tegmark III) that it’s either in the basement on some high-amplitude ridge or that it’s in some simulation that’s really trying to look like it.
Which is to say, each AI they’re considering simulating is confident that it itself is real, in a certain sense.
Is this a foul? How do AIs justify this confidence when they can’t even simulate the universe and check whether their past is actually orderly? Why does the AI just assume that its observations have a simple explanation? What about all the non-existent AIs that use exactly the same reasoning, and draw the false conclusion that they exist?
Well, that’s the beauty of it: there aren’t any.
They don’t exist.
To suppose an AI that isn’t willing to look around it and conclude that it’s in an orderly part of Tegmark III (rather than lost in the great void of configuration space) is to propose a bold new theory of epistemics, in which the occam’s razor has been jettisoned and the AI is convinced that it’s a Boltzmann mind.
I acknowledge that an AI that’s convinced it’s a Boltzmann mind is more likely to accept trade-offers presented by anyone it thinks is more real than it, but I do not expect that sort of mind to be capable of killing us.
Note that there’s a wobbly step here in the part where we’re like “there’s a hypothesis explaining our experiences that would be very simple if we were on a high-amplitude ridge, and we lack the compute to check that we’re actually on a high-amplitude ridge, but no other hypothesis comes close in terms of simplicity, so I guess we’ll conclude we’re on a high-amplitude ridge”.
To my knowledge, humanity still lacks a normative theory of epistemics in minds significantly smaller than the universe. It’s conceivable that when we find such a theory it’ll suggest some other way to treat hypotheses like these (that would be simple if an intractable computation went our way), without needing to fall back on the observation that we can safely assume the computation goes our way on the grounds that, despite how this step allows non-extant minds to draw false conclusions from true premises, the affected users are fortunately all non-extant.
The trick looks like it works, to me, but it still feels like a too-clever-by-half inelegant hack, and if laying it out like this spites somebody into developing a normative theory of epistemics-while-smol, I won’t complain.
...I am now bracing for the conversation to turn to a discussion of dubiously-extant minds with rapidly satiable preferences forming insurance pools against the possibility that they don’t exist.
In attempts to head that one off at the pass, I’ll observe that most humans, at least, don’t seem to lose a lot of sleep over the worry that they don’t exist (neither in physics nor in simulation), and I’m skeptical that the AIs we build will harbor much worry either.
Furthermore, in the case that we start fielding trade offers not just from distant civilizations but from non-extant trade partners, the market gets a lot more competitive.
That being said, I expect that resolving the questions here requires developing a theory of epistemics-while-smol, because groups of people all using the “hypotheses that would provide a simple explanation for my experience if a calculation went my way can safely be assumed to provide a simple explanation for my experience” step are gonna have a hard time pooling up. And so you’d somehow need to look for pools of people that reason differently (while still reasoning somehow).
I don’t know how to do that, but suffice to say, I’m not expecting it to add up to a story like “so then some aliens that don’t exist called up our UFAI and said: “hey man, have you ever worried that you don’t exist at all, not even in simulation? Because if you don’t exist, then we might exist! And in that case, today’s your lucky day, because we’re offering you a whole [untranslatable 17] worth of resources in our realm if you give the humans a cute epilog in yours”, and our UFAI was like “heck yeah” and then didn’t kill us”.
Not least because none of this feels like it’s making the “distant people have difficulty concentrating resources on our UFAI in particular” problem any better (and in fact it looks like considering non-extant trade partners and deals makes the whole problem worse, probably unworkably so).
seems to me to have all the components of a right answer! …and some of a wrong answer. (we can safely assume that the future civ discards all the AIs that can tell they’re simulated a priori; that’s an easy tell.)
I’m heartened somewhat by your parenthetical pointing out that the AI’s prior on simulation is low account of there being too many AIs for simulators to simulate, which I see as the crux of the matter.
My answer is in spoilers, in case anyone else wants to answer and tell me (on their honor) that their answer is independent from mine, which will hopefully erode my belief that most folk outside MIRI have a really difficult time fielding wacky decision theory Qs correctly.
The sleight of hand is at the point where God tells both AIs that they’re the only AIs (and insinuates that they have comparable degree).
Consider an AI that looks around and sees that it sure seems to be somewhere in Tegmark III. The hypothesis “I am in the basement of some branch that is a high-amplitude descendant of the big bang” has some probability, call this p. The hypothesis “Actually I’m in a simulation performed by a civilization in a high-amplitude branch descendant from the big bang” has a probability something like p·2^-N, where N is the entropy of the distribution the simulators sample from.
Unless the simulators simulate exponentially many AIs (in the entropy of their distribution), the AI is exponentially confident that it’s not in the simulation. And we don’t have the resources to pay exponentially many AIs 10 planets each.
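As a minimal sketch of that calculation (the function and the specific values of N and K below are my own illustrative assumptions):

```python
# Toy odds calculation for "am I simulated?", per the argument above.
def simulation_odds(N: float, K: float) -> float:
    """Odds of 'I am one of the K simulations' vs. 'I am in the basement'.

    Each simulation lands on this particular AI with probability ~2^-N,
    where N is the entropy (in bits) of the simulators' distribution, so
    the total simulated measure is ~K * 2^-N of the basement measure.
    """
    return K * 2.0 ** (-N)

# Even a trillion simulations leave the AI overwhelmingly confident that
# it's in the basement, if the simulators' distribution has 100 excess bits:
print(simulation_odds(N=100, K=1e12))        # ~7.9e-19
# To reach even odds you need exponentially many simulations, K ~ 2^N:
print(simulation_odds(N=100, K=2.0 ** 100))  # 1.0
```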
The only thing we need there is that the AI can’t distinguish sims from base reality, so it thinks it’s more likely to be in a sim, as there are more sims.
I don’t think this part does any work, as I touched on elsewhere. An AI that cares about the outer world doesn’t care how many instances are in sims versus reality (and considers this fact to be under its control much moreso than yours, to boot). An AI that cares about instantiation-weighted experience considers your offer to be a technical-threat and ignores you. (Your reasons to make the offer would evaporate if it were the sort to refuse, and its instance-weighted experiences would be better if you never offered.)
Nevertheless, the translation of the entropy argument into the simulation setting is: The branches of humanity that have exactly the right UFAI code to run in simulation are very poor (because if you wait so long that humans have their hands on exactly the right UFAI code then you’ve waited too long; those are dead earthlings, not surviving dath ilani). And the more distant surviving branches don’t know which UFAIs to attempt to trade with; they have to produce some distribution over other branches of Tegmark III and it matters how much more entropy their distribution has than the (square of the) wave function.
(For some intuition as to why this is hard, consider the challenge of predicting the positions of particles in a mole of gas that used to be compressed in the corner of a chamber a long time ago. It’s way, way easier to generate a plausible-looking arrangement of the gas particles today than it is to concentrate your probability mass into only the arrangements that actually compress into a corner if you run physics backwards in time for long enough. “We can run plausible-seeming simulations” is very very different from “we can concentrate our probability-mass tightly around the real configurations”. The entropy of your model is gonna wind up roughly maximal given the macroscopic pressure/temperature measurements, which is significantly in excess of the entropy in the initial configuration.)
What this amounts to is a local UFAI that sees some surviving branches that are frantically offering all sorts of junk that UFAIs might like, with only some tiny fraction—exponentially small in the crossentropy between their subjective model of UFAI preferences and the true Tegmark III distribution—corresponding to the actual UFAI’s preferences.
One complication that I mentioned in another thread but not this one (IIRC) is the question of how much more entropy there is in a distant trade partner’s model of Tegmark III (after spending whatever resources they allocate) than there is entropy in the actual (squared) wave function, or at least how much more entropy there is in the parts of the model that pertain to which civilizations fall.
In other words: how hard is it for distant trade partners to figure out that it was us who died, rather than some other plausible-looking human civilization that doesn’t actually get much amplitude under the wave function? Is figuring out who’s who something that you can do without simulating a good fraction of a whole quantum multiverse starting from the big bang for 13 billion years?
afaict, the amount distant civilizations can pay for us (in particular) falls off exponentially quickly in leftover bits of entropy, so this is pretty relevant to the question of how much they can pay a local UFAI.
Starting from now? I agree that that’s true in some worlds that I consider plausible, at least, and I agree that worlds whose survival-probabilities are sensitive to my choices are the ones that render my choices meaningful (regardless of how deterministic they are).
Conditional on Earth being utterly doomed, are we (today) fewer than 75 qbitflips from being in a good state? I’m not sure, it probably varies across the doomed worlds where I have decent amounts of subjective probability. It depends how much time we have on the clock, depends where the points of no-return are. I haven’t thought about this a ton. My best guess is it would take more than 75 qbitflips to save us now, but maybe I’m not thinking creatively enough about how to spend them, and I haven’t thought about it in detail and expect I’d be sensitive to argument about it /shrug.
(If you start from 50 years ago? Very likely! 75 bits is a lot of population rerolls. If you start after people hear the thunder of the self-replicating factories barrelling towards them, and wait until the very last moments that they would consider becoming a distinct person who is about to die from AI, and who wishes to draw upon your reassurance that they will be saved? Very likely not! Those people look very, very dead.)
One possible point of miscommunication: when I said something like “obviously it’s worse than 2^-75 at the extreme where it’s actually them who is supposed to survive”, that was intended to apply to the sort of person who has seen the skies darken and has heard the thunder, rather than the version of them that exists here in 2024. This was not intended to be some bold or surprising claim. It was an attempt to establish an obvious basepoint at one very extreme end of a spectrum, that we could start interpolating from (asking questions like “how far back from there are the points of no return?” and “how much more entropy would they have than god, if people from that branchpoint spent stars trying to figure out what happened after those points?”).
(The 2^-75 was not intended to be even an estimate of how dead the people on the one end of the extreme are. It is the “can you buy a star” threshold. I was trying to say something like “the individuals who actually die obviously can’t buy themselves a star just because they inhabit Tegmark III, now let’s drag the cursor backwards and talk about whether, at any point, we cross the a-star-for-everyone threshold”.)
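As an aside, here is one way to recover a threshold in the neighborhood of 2^-75; the star count is a rough order-of-magnitude figure I am assuming for illustration, not something stated in the thread:

```python
import math

# If survivors pay for dead branches in proportion to quantum measure, a
# branch with measure 2^-B receives ~2^-B of the survivors' stars. The
# payment rounds down to "less than one star" once 2^-B is smaller than
# one over the number of stars available.
stars_in_reachable_universe = 10 ** 22.5  # assumed order of magnitude

threshold_bits = math.log2(stars_in_reachable_universe)
print(threshold_bits)  # ~74.7 bits, i.e. roughly a 2^-75 threshold
```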
If that doesn’t clear things up and you really want to argue that, conditional on Earth being as doomed as it superficially looks to me, most of those worlds are obviously <100 quantum bitflips from victory today, I’m willing to field those arguments; maybe you see some clever use of qbitflips I don’t and that would be kinda cool. But I caveat that this doesn’t seem like a crux to me and that I acknowledge that the other worlds (where Earth merely looks unsalvageable) are the ones motivating action.
What are you trying to argue? (I don’t currently know what position y’all think I have or what position you’re arguing for. Taking a shot in the dark: I agree that quantum bitflips have loads more influence on the outcome the earlier in time they are.)
You often claim that conditional on us failing in alignment, alignment was so unlikely that among branches that had roughly the same people (genetically) during the Singularity, only a 2^-75 fraction survives.
My first claim is not “fewer than 1 in 2^75 of the possible configurations of human populations navigate the problem successfully”.
My first claim is more like “given a population of humans that doesn’t even come close to navigating the problem successfully (given some unoptimized configuration of the background particles), probably you’d need to spend quite a lot of bits of optimization to tune the butterfly-effects in the background particles to make that same population instead solve alignment (depending how far back in time you go).” (A very rough rule of thumb here might be “it should take about as many bits as it takes to specify an FAI (relative to what they know)”.)
This is especially stark if you’re trying to find a branch of reality that survives with the “same people” on it. Humans seem to be very, very sensitive about what counts as the “same people”. (e.g., in August, when gambling on who gets a treat, I observed a friend toss a quantum coin, see it come up against them, and mourn that a different person—not them—would get to eat the treat.)
(Insofar as y’all are trying to argue “those MIRI folk say that AI will kill you, but actually, a person somewhere else in the great quantum multiverse, who has the same genes and childhood as you but whose path split off many years ago, will wake up in a simulation chamber and be told that they were rescued by the charity of aliens! So it’s not like you’ll really die”, then I at least concede that that’s an easier case to make, although it doesn’t feel like a very honest presentation to me.)
Conditional on observing a given population of humans coming nowhere close to solving the problem, the branches wherein those humans live (with identity measured according to the humans) are probably very extremely narrow compared to the versions where they die. My top guess would be that 2^-75 number is a vast overestimate of how thick those branches are (and the 75 in the exponent does not come from any attempt of mine to make that estimate).
As I said earlier: you can take branches that branched off earlier and earlier in time, and they’ll get better and better odds. (Probably pretty drastically, as you back off past certain points of no return. I dunno where the points of no return are. Weeks? Months? Years? Not decades, because with decades you can reroll significant portions of the population.)
I haven’t thought much about what fraction of populations I’d expect to survive off of what branch-point. (How many bits of optimization do you need back in the 1880s to swap Hitler out for some charismatic science-enthusiast statesman that will happen to have exactly the right influence on the following culture? How many such routes are there? I have no idea.)
Three big (related) issues with hoping that forks branched off sufficiently early (who are more numerous) save us in particular (rather than other branches) are (a) they plausibly care more about populations nearer to them (e.g. versions of themselves that almost died); (b) insofar as they care about more distant populations (that e.g. include you), they have rather a lot of distant populations to attempt to save; and (c) they have trouble distinguishing populations that never were, from populations that were and then weren’t.
Point (c) might be a key part of the story, not previously articulated (that I recall), that you were missing?
Like, you might say “well, if one in a billion branches look like dath ilan and the rest look like earth, and the former basically all survive and the latter basically all die, then the fact that the earthlike branches have ~0 ability to save their earthlike kin doesn’t matter, so long as the dath-ilan like branches are trying to save everyone. dath ilan can just flip 30 quantum coins to select a single civilization from among the billion that died, and then spend 1/million resources on simulating that civilization (or paying off their murderer or whatever), and that still leaves us with one-in-a-quintillion fraction of the universe, which is enough to keep the lights running”.
Part of the issue with this is that dath ilan cannot simply sample from the space of dead civilizations; it has to sample from a space of plausible dead civilizations rather than actual dead civilizations, in a way that I expect to smear loads and loads of probability-mass over regions that had concentrated (but complex) patterns of amplitude. The concentrations of Everett branches are like a bunch of wiggly thin curves etched all over a disk, and it’s not too hard to sample uniformly from the disk (and draw a plausible curve that the point could have been on), but it’s much harder to sample only from the curves. (Or, at least, so the physics looks to me. And this seems like a common phenomenon in physics. cf. the apparent inevitable increase of entropy when what’s actually happening is a previously-compact volume in phase space evolving into a bunch of wiggly thin curves, etc.)
So when you’re considering whether surviving humans will pay for our souls—not somebody’s souls, but our souls in particular—you have a question of how these alleged survivors came to pay for us in particular (rather than some other poor fools). And there’s a tradeoff that runs on one extreme from “they’re saving us because they are almost exactly us and they remember us and wish us to have a nice epilog” all the way to “they’re some sort of distant cousins, branched off a really long time ago, who are trying to save everyone”.
The problem with being on the “they care about us because they consider they basically are us” end is that those people are dead too (conditional on us being dead). And as you push the branch-point earlier and earlier in time, you start finding more survivors, but those survivors also wind up having more and more fools to care about (in part because they have trouble distinguishing the real fallen civilizations from the neighboring civilization-configurations that don’t get appreciable quantum amplitude in basement physics).
If you tell me where on this tradeoff curve you want to be, we can talk about it. (Ryan seemed to want to look all the way on the “insurance pool with aliens” end of the spectrum.)
The point of the 2^75 number is that that’s about the threshold of “can you purchase a single star”. My guess is that, conditional on people dying, versions that they consider also them survive with degree way less than 2^-75, which rules out us being the ones who save us.
If we retreat to “distant cousin branches of humanity might save us”, there’s a separate question of how the width of the surviving quantum branch compares to the volume taken up by us in the space of civilizations they attempt to save. I think my top guess is that a distant branch of humanity, spending stellar-level resources in attempts to concentrate its probability-mass in accordance with how quantum physics concentrates (squared) amplitude, still winds up so uncertain that there’s still 50+ bits of freedom left over? Which means that if one-in-a-billion of our cousin-branches survives, they still can’t buy a star (unless I flubbed my math).
And I think it’s real, real easy for them to wind up with 1000 bits leftover, in which case their purchasing power is practically nothing.
(This actually seems like a super reasonable guess to me. Like, if you imagine knowing that a mole of gas was compressed into the corner of a box with known volume, and you then let the gas bounce around for 13 billion years and take some measurements of pressure and temperature, and then think long and hard using an amount of compute that’s appreciably less than the amount you’d need to just simulate the whole thing from the start. It seems to me like you wind up with a distribution that has way way more than 1000 bits more entropy than is contained in the underlying physics. Imagining that you can spend about 1 ten millionth of the universe on refining a distribution over Tegmark III with entropy that’s within 50 bits of god seems very very generous to me; I’m very uncertain about this stuff but I think that even mature superintelligences could easily wind up 1000 bits from god here.)
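A quick sanity check of the arithmetic in the last few paragraphs, using the numbers given there (30 bits for one-in-a-billion survival, 50 or 1000 leftover bits, and a 75-bit single-star threshold):

```python
# Numbers taken from the surrounding discussion.
survivor_bits = 30        # one-in-a-billion cousin branches survive: 2^-30
star_threshold_bits = 75  # measure below ~2^-75 can't buy a single star

# Effective measure survivors can concentrate on us = survival measure
# times the leftover uncertainty in their model of which civilizations fell.
for leftover_bits in (50, 1000):
    effective_bits = survivor_bits + leftover_bits
    fraction = 2.0 ** (-effective_bits)
    print(effective_bits, effective_bits > star_threshold_bits, fraction)
# 80 bits (> 75): already below the single-star threshold.
# 1030 bits: a fraction on the order of 1e-310, i.e. practically nothing.
```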
Regardless, as I mentioned elsewhere, I think that a more relevant question is how those trade-offers stack up to other trade-offers, so /shrug.
The “you can’t save us by flipping 75 bits” thing seems much more likely to me on a timescale of years than a timescale of decades; I’m fairly confident that quantum fluctuations can cause different people to be born, so if you’re looking 50 years back you can reroll the population dice.
Summarizing my stance into a top-level comment (after some discussion, mostly with Ryan):
None of the “bamboozling” stuff seems to me to work, and I didn’t hear any defenses of it. (The simulation stuff doesn’t work on AIs that care about the universe beyond their senses, and sane AIs that care about instance-weighted experiences see your plan as a technical-threat and ignore it. If you require a particular sort of silly AI for your scheme to work, then the part that does the work is the part where you get that precise sort of silliness stably into an AI.)
The part that is doing work seems to be “surviving branches of humanity could pay the UFAI not to kill us”.
I doubt surviving branches of humanity have much to pay us, in the case where we die; failure looks like it’ll correlate across branches.
Various locals seem to enjoy the amended proposal (not mentioned in the post, afaik) that a broad cohort of aliens who went in with us on a UFAI insurance pool would pay the UFAI we build not to kill us.
It looks to me like insurance premiums are high and that failures are correlated across members.
An intuition pump for thinking about the insurance pool (which I expect is controversial and am only just articulating): distant surviving members of our insurance pool might just run rescue simulations instead of using distant resources to pay a local AI to not kill us. (It saves on transaction fees, and it’s not clear it’s much harder to figure out exactly which civilization to save than it is to figure out exactly what to pay the UFAI that killed them.) Insofar as scattered distant rescue-simulations don’t feel particularly real or relevant to you, there’s a decent chance they don’t feel particularly real or relevant to the UFAI either. Don’t be shocked if the UFAI hears we have insurance and tosses quantum coins and only gives humanity an epilog in a fraction of the quantum multiverse so small that it feels about as real and relevant to your anticipations as the fact that you could always wake up in a rescue sim after getting in a car crash.
My best guess is that the contribution of the insurance pool towards what we experience next looks dwarfed by other contributions, such as sale to local aliens. (Comparable, perhaps, to how my anticipation if I got in a car crash would probably be less like “guess I’ll wake up in a rescue sim” and more like “guess I’ll wake up injured, if at all”.)
If you’re wondering what to anticipate after an intelligence explosion, my top suggestion is “oblivion”. It’s a dependable, tried-and-true anticipation following the sort of stuff I expect to happen.
If you insist that Death Cannot Be Experienced and ask what to anticipate anyway, it still looks to me like the correct answer is “some weird shit”. Not because there’s nobody out there that will pay to run a copy of you, but because there’s a lot of entities out there making bids, and your friends are few and far between among them (in the case where we flub alignment).
I was responding to David saying
Otherwise, I largely agree with your comment, except that I think that us deciding to pay if we win is entangled with/evidence for a general willingness to pay among the gods, and in that sense it’s partially “our” decision doing the work of saving us.
and was insinuating that we deserve extremely little credit for such a choice, in the same way that a child deserves extremely little credit for a fireman saving someone that the child could not (even if it’s true that the child and the fireman share some aspects of a decision procedure). My claim was intended less like agreement with David’s claim and more like reductio ad absurdum, with the degree of absurdity left slightly ambiguous.
(And on second thought, the analogy would perhaps have been tighter if the firefighter was saving the child.)
Attempting to summarize your argument as I currently understand it, perhaps something like:
Suppose humanity wants to be insured against death, and is willing to spend 1/million of its resources in worlds where it lives for 1/trillion of those resources in worlds where it would otherwise die.
It suffices, then, for humanity to be the sort of civilization that, if it matures, would comb through the multiverse looking for [other civilizations in this set], and find ones that died, and verify that they would have acted as follows if they’d survived, and then pay off the UFAIs that murdered them, using 1/million of their resources.
Even if only 1/thousand such civilizations make it, and the AI charges a factor of 1000 for the distance, transaction fees, and to sweeten the deal relative to any other competition, this still means that insofar as humanity would have become this sort of civilization, we should expect 1/trillion of the universe to be spent on us.
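The summary above reduces to a one-line calculation, using the text's own numbers (1/million premium, 1/thousand survival rate, 1000x markup; all hypotheticals from the argument, not independent estimates):

```python
# Insurance-pool arithmetic from the summarized argument above.
import math

premium = 1e-6        # fraction of resources mature survivors pay in
survival_rate = 1e-3  # fraction of pooled civilizations that make it
markup = 1e3          # AI's charge for distance, fees, and competition

# Expected fraction of the universe spent buying out a dead civilization:
payout = premium * survival_rate / markup
assert math.isclose(payout, 1e-12)  # i.e. 1/trillion, matching the text
```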
One issue I have with this is that I do think there’s a decent chance that the failures across this pool of collaborators are hypercorrelated (good guess). For instance, a bunch of my “we die” probability-mass is in worlds where this is a challenge that Dath Ilan can handle and that Earth isn’t anywhere close to handling, and if Earth pools with a bunch of similarly-doomed-looking aliens, then under this hypothesis, it’s not much better than humans pooling up with all the Everett-branches since 12Kya.
Another issue I have with this is that your deal has to look better to the AI than various other deals for getting what it wants (depends how it measures the multiverse, depends how its goals saturate, depends who else is bidding).
A third issue I have with this is whether inhuman aliens who look like they’re in this cohort would actually be good at purchasing our CEV per se, rather than purchasing things like “grant each individual human freedom and a wish-budget” in a way that many humans fail to survive.
I get the sense that you’re approaching this from the perspective of “does this exact proposal have issues” rather than “in the future, if our enlightened selves really wanted to avoid dying in base reality, would there be an approach which greatly (acausally) reduces the chance of this”.
My stance is something a bit more like “how big do the insurance payouts need to be before they dominate our anticipated future experiences”. I’m not asking myself whether this works a nonzero amount, I’m asking myself whether it’s competitive with local aliens buying our saved brainstates, or with some greater Kindness Coalition (containing our surviving cousins, among others) purchasing an epilogue for humanity because of something more like caring and less like trade.
My points above drive down the size of the insurance payments, and at the end of the day I expect they’re basically drowned out.
(And insofar as you’re like “I think you’re misleading people when you tell them they’re all going to die from this”, I’m often happy to caveat that maybe your brainstate will be sold to aliens. However, I’m not terribly sympathetic to the request that I always include this caveat; that feels to me a little like a request to always caveat “please wear your seatbelt to reduce your chance of dying in a car crash” with “(unless anthropic immortality is real and it’s not possible for anyone to die at all! in which case i’d still rather you didn’t yeet yourself into the unknown, far from your friends and family; buckle up)”. Like, sure, maybe, but it’s exotic wacky shit that doesn’t belong in every conversation about events colloquially considered to be pretty deathlike.)
What does degree of determination have to do with it? If you lived in a fully deterministic universe, and you were uncertain whether it was going to live or die, would you give up on it on the mere grounds that the answer is deterministic (despite your own uncertainty about which answer is physically determined)?
I think I’m confused why you work on AI safety then, if you believe the end-state is already 2^75 level overdetermined.
It’s probably physically overdetermined one way or another, but we’re not sure which way yet. We’re still unsure about things like “how sensitive is the population to argument” and “how sensibly do governments respond if the population shifts”.
But this uncertainty—about which way things are overdetermined by the laws of physics—does not bear all that much relationship to the expected ratio of (squared) quantum amplitude between branches where we live and branches where we die. It just wouldn’t be that shocking for the ratio between those two sorts of branches to be on the order of 2^75; this would correspond to saying something like “it turns out we weren’t just a few epileptic seizures and a well-placed thunderstorm away from the other outcome”.
Background: I think there’s a common local misconception of logical decision theory that it has something to do with making “commitments” including while you “lack knowledge”. That’s not my view.
I pay the driver in Parfit’s hitchhiker not because I “committed to do so”, but because when I’m standing at the ATM and imagine not paying, I imagine dying in the desert. Because that’s what my counterfactuals say to imagine. To someone with a more broken method of evaluating counterfactuals, I might pseudo-justify my reasoning by saying “I am acting as you would have committed to act”. But I am not acting as I would have committed to act; I do not need a commitment mechanism; my counterfactuals just do the job properly no matter when or where I run them.
To be clear: I think there are probably competent civilizations out there who, after ascending, will carefully consider the places where their history could have been derailed, and carefully comb through the multiverse for entities that would be able to save those branches, and will pay those entities, not because they “made a commitment”, but because their counterfactuals don’t come with little labels saying “this branch is the real branch”. The multiverse they visualize in which the (thick) survivor branches pay a little to the (thin) derailed branches (leading to a world where everyone lives (albeit a bit poorer)), seems better to them than the multiverse they visualize in which no payments are made (and the derailed branches die, and the on-track branches are a bit richer), and so they pay.
There’s a question of what those competent civilizations think when they look at us, who are sitting here yelling “we can’t see you, and we don’t know how to condition our actions on whether you pay us or not, but as best we can tell we really do intend to pay off the AIs of random alien species—not the AIs that killed our brethren, because our brethren are just too totally dead and we’re too poor to save all but a tiny fraction of them, but really alien species, so alien that they might survive in such a large portion that their recompense will hopefully save a bigger fraction of our brethren”.
What’s the argument for the aliens taking that offer? As I understand it, the argument goes something like “your counterfactual picture of reality should include worlds in which your whole civilization turned out to be much much less competent, and so when you imagine the multiverse where you pay for all humanity to live, you should see that, in the parts of the multiverse where you’re totally utterly completely incompetent and too poor to save anything but a fraction of your own brethren, somebody else pays to save you”.
We can hopefully agree that this looks like a particularly poor insurance deal relative to the competing insurance deals.
For one thing, why not cut out the middleman and just randomly instantiate some civilization that died? (Are we working under the assumption that it’s much harder for the aliens to randomly instantiate you than to randomly instantiate the stuff humanity’s UFAI ends up valuing? What’s up with that?)
But even before that, there’s all sorts of other juicier-looking opportunities. For example, suppose the competent civilization contains a small collection of rogues whom they assess to have a small probability of causing an uprising and launching an AI before it’s ready. They presumably have a pretty solid ability to figure out exactly what that AI would like and to offer trades to it directly, and that’s a much more appealing way to spend resources allocated to insurance. My guess is there’s loads and loads of options like that that eat up all the spare insurance budget, before our cries get noticed by anyone who cares for the sake of decision theory (rather than charity).
Perhaps this is what you meant by “maybe they prefer to make deals with beings more similar to them”; if so I misunderstood; the point is not that they have some familiarity bias but that beings closer to them make more compelling offers.
The above feels like it suffices, to me, but there’s still another part of the puzzle I feel I haven’t articulated.
Another piece of background: To state the obvious, we still don’t have a great account of logical updatelessness, and so attempts to discuss what it entails will be a bit fraught. Plowing ahead anyway:
The best option in a counterfactual mugging with a logical coin and a naive predictor is to calculate the logical value of the coin flip and pay iff you’re counterfactual. (I could say more about what I mean by ‘naive’, but it basically just serves to render this statement true.) A predictor has to do a respectable amount of work to make it worth your while to pay in reality (when the coin comes up against you).
What sort of work? Well, one viewpoint on it (that sidesteps questions of “logically-impossible possible worlds” and what you’re supposed to do as you think further and realize that they’re impossible) is that the predictor isn’t so much demanding that you make your choice before you come across knowledge of some fact, so much as they’re offering to pay you if you render a decision that is logically independent from some fact. They don’t care whether you figure out the value of the coin, so long as you don’t base your decision on that knowledge. (There’s still a question of how exactly to look at someone’s reasoning and decide what logical facts it’s independent of, but I’ll sweep that under the rug.)
From this point of view, when people come to you and they’re like “I’ll pay you iff your reasoning doesn’t depend on X”, the proper response is to use some reasoning that doesn’t depend on X to decide whether the amount they’re paying you is more than VOI(X).
In cases where X is something like a late digit of pi, you might be fine (up to your ability to tell that the problem wasn’t cherry-picked). In cases where X is tightly intertwined with your basic reasoning faculties, you should probably tell them to piss off.
Someone who comes to you with an offer and says “this offer is void if you read the fine print or otherwise think about the offer too hard”, brings quite a bit of suspicion onto themselves.
With that in mind, it looks to me like the insurance policy on offer reads something like:
would you like to join the confederacy of civilizations that dedicate 1/million of their resource to pay off UFAIs?
cost: 1/million of your resources.
benefit: any UFAI you release that is amenable to trade will be paid off with 1/million * 1/X to allocate you however many resources that’s worth, where X is the fraction of people who take this deal and die (modulo whatever costs are needed to figure out which UFAIs belong to signatories and etc.)
caveat: this offer is only valid if your reasoning is logically independent from your civilizational competence level, and if your reasoning for accepting the proposal is not particularly skilled or adept
And… well this isn’t a knockdown argument, but that really doesn’t look like a very good deal to me. Like, maybe there’s some argument of the form “nobody in here is trying to fleece you because everyone in here is also stupid” but… man, I just don’t get the sense that it’s a “slam dunk”, when I look at it without thinking too hard about it and in a way that’s independent of how competent my civilization is.
Mostly I expect that everyone stooping to this deal is about as screwed as we are (namely: probably so screwed that they’re bringing vastly more doomed branches than saved ones, to the table) (or, well, nearly everyone weighted by whatever measure matters).
Roughly speaking, I suspect that the sort of civilizations that aren’t totally fucked can already see that “comb through reality for people who can see me and make their decisions logically dependent on mine” is a better use of insurance resources, by the time they even consider this policy. So when we plead with them to evaluate the policy in a fashion that’s logically independent from whether they’re smart enough to see that they have more foolproof options available, I think they correctly see us as failing to offer more than VOI(WeCanThinkCompetently) in return, because they are correctly suspicious that we’re trying to fleece them (which we kinda are; we’re kinda trying to wish ourselves into a healthier insurance-pool).
Which is to say, I don’t have a full account of how to be logically updateless yet, but I suspect that this “insurance deal” comes across like a contract with a clause saying “void if you try to read the fine print or think too hard about it”. And I think that competent civilizations are justifiably suspicious, and that they correctly believe they can find other better insurance deals if they think a bit harder and void this one.
I donated $25k. Thanks for doing what you do.