And I’d guess that one big universe is more than twice as Fun as two small universes, so even if there were no transaction costs, it wouldn’t be worth it.
Perhaps, although I also think it’s plausible that future humanity would find universes in which we’re wiped out completely to be particularly sad and so worth spending a disproportionate amount of Fun to partially recover.
It’s more like: 1% survive, 1% make paperclips, 1% make giant gold obelisks, etc.
I don’t think this changes the situation, since future humanity can just make paperclips with probability 1/99, obelisks with probability 1/99, etc., putting us in an identical bargaining situation with each possible UFAI as if there were only one.
Maybe the branches that survive decide to spend some stars on a mixture of plausible-human-UFAI goals in exchange for humans getting an asteroid in lots of places, if the transaction costs are low, the returns to scale diminish enough, and the visibility works out favorably. But it looks pretty dicey to me, and the point about discussing aliens first still stands.
Yeah, this is the scenario I think is most likely. As you say, it’s a pretty uncomfortable thing to lay our hopes on, but I thought it was more plausible than any of the scenarios brought up in the post, so it deserved a mention. It doesn’t feel intuitively obvious to me that aliens are a better bet. I guess it comes down to how much trust you have in generic aliens being nice vs. how likely AIs are to be motivated by weird anthropic considerations (in a way that we can actually predict).
Paperclips vs. obelisks does make the bargaining harder, because Clippy would be offered fewer expected paperclips.
My current guess is we survive if our CEV puts a steep premium on that. Of course, such hopes of trade ex machina shouldn’t affect how we orient to the alignment problem, even if they affect our personal lives. We should still play to win.
But Clippy also controls fewer expected universes, so the relative bargaining positions of humans vs. UFAIs remain the same (compared to a scenario in which all UFAIs had the same value system).
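A toy sketch of this symmetry argument, with made-up numbers (1% human-win measure, 99 equally likely UFAI value systems; none of these figures are from the thread): each UFAI is offered 1/N as much, but also controls 1/N of the UFAI measure, so its bargaining position is unchanged from the single-Clippy case.

```python
# Toy model of the measure-symmetry argument. All numbers are
# hypothetical, chosen only to illustrate the ratio staying fixed.

N = 99                                # distinct possible UFAI goals
p_human_win = 0.01                    # measure where aligned humanity wins
p_each_ufai = (1 - p_human_win) / N   # measure controlled by one specific UFAI

# Humanity randomizes: it builds each UFAI's favored artifact with
# probability 1/N, so the expected resources offered to any one UFAI are:
offer_to_each = p_human_win * (1 / N)

# "Bargaining position" here: what a UFAI is offered, relative to the
# measure it controls (and so could offer back).
position = offer_to_each / p_each_ufai

# Single-UFAI toy case (N = 1): all non-human measure is Clippy's.
single_position = p_human_win / (1 - p_human_win)

# The two positions are identical, as the comment argues.
assert abs(position - single_position) < 1e-12
```

The 1/N factors cancel in the ratio, which is why splitting the UFAI measure across many value systems leaves each negotiation looking the same as the one-Clippy scenario.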
Ah right, because Clippy has less measure, and so has less to offer, so less needs to be offered to it. Nice catch! Guess I’ve been sort of heeding Nate’s advice not to think much about this. :)
Of course, there would still be significant overhead from trading with and/or outbidding sampled plethoras of UFAIs, vs. the toy scenario where it’s just Clippy.
I currently suspect we still get more survival measure from aliens in this branch who solved their alignment problems and have a policy of offering deals to UFAIs that didn’t kill their biological boot loaders. Such aliens need not be motivated by compassion to the extent that aboriginals form a Schelling bloc, handwave appendagewave. (But we should still play to win, like they did.)