This idea keeps getting rediscovered; thanks for writing it up! The key ingredient is acausal trade between aligned and unaligned superintelligences, rather than between unaligned superintelligences and humans. Simulation isn’t a key ingredient; the question is really the more general one of resource allocation across branches.
epistemic meristem
Karma: 35
- Oct 19, 2022, 1:35 AM; 1 point: comment on “Decision theory does not imply that we get to have nice things”
Too much power, I would assume. Yet he didn’t kill Bo Xilai.
Why the downboats? People new to LW jargon probably wouldn’t realize “money brain” is a typo.
Nitpick: maybe aligned and unaligned superintelligences acausally trade across future branches? If so, maybe on the mainline we’re left with a very small yet nonzero fraction of the cosmic endowment, a cosmic booby prize, if you will?
“Booby prize with dignity” sounds like a bit of an oxymoron...
- May 4, 2022, 6:18 PM; 7 points: comment on “Negotiating Up and Down the Simulation Hierarchy: Why We Might Survive the Unaligned Singularity”
You have a money brain? That’s awesome, most of us only have monkey brains! 🙂
What does “corrupt” mean in this context? What are some examples of noncorrupt employers?
I think it would be helpful to note at the top of the post that it’s crossposted here. I initially misinterpreted “this blog” in the first sentence as referring to LW.