Summarizing my stance into a top-level comment (after some discussion, mostly with Ryan):
None of the “bamboozling” stuff seems to me to work, and I didn’t hear any defenses of it. (The simulation stuff doesn’t work on AIs that care about the universe beyond their senses, and sane AIs that care about instance-weighted experiences see your plan as a threat (in the technical sense) and ignore it. If you require a particular sort of silly AI for your scheme to work, then the part that does the work is the part where you get that precise sort of silliness stably into an AI.)
The part that is doing work seems to be “surviving branches of humanity could pay the UFAI not to kill us”.
I doubt surviving branches of humanity have much to pay us, in the case where we die; failure looks like it’ll correlate across branches.
Various locals seem to enjoy the amended proposal (not mentioned in the post afaik) that a broad cohort of aliens who went in with us on a UFAI insurance pool would pay the UFAI we build not to kill us.
It looks to me like insurance premiums are high and that failures are correlated across members.
An intuition pump for thinking about the insurance pool (which I expect is controversial and am only just articulating): distant surviving members of our insurance pool might just run rescue simulations instead of using distant resources to pay a local AI to not kill us. (It saves on transaction fees, and it’s not clear it’s much harder to figure out exactly which civilization to save than it is to figure out exactly what to pay the UFAI that killed them.) Insofar as scattered distant rescue-simulations don’t feel particularly real or relevant to you, there’s a decent chance they don’t feel particularly real or relevant to the UFAI either. Don’t be shocked if the UFAI hears we have insurance and tosses quantum coins and only gives humanity an epilogue in a fraction of the quantum multiverse so small that it feels about as real and relevant to your anticipations as the fact that you could always wake up in a rescue sim after getting in a car crash.
My best guess is that the contribution of the insurance pool towards what we experience next looks dwarfed by other contributions, such as sale to local aliens. (Comparable, perhaps, to how my anticipation if I got in a car crash would probably be less like “guess I’ll wake up in a rescue sim” and more like “guess I’ll wake up injured, if at all”.)
If you’re wondering what to anticipate after an intelligence explosion, my top suggestion is “oblivion”. It’s a dependable, tried-and-true anticipation following the sort of stuff I expect to happen.
If you insist that Death Cannot Be Experienced and ask what to anticipate anyway, it still looks to me like the correct answer is “some weird shit”. Not because there’s nobody out there that will pay to run a copy of you, but because there’s a lot of entities out there making bids, and your friends are few and far between among them (in the case where we flub alignment).