But, sure, if you’re somehow magically unhackable and very good at keeping the paperclipper boxed until you fully understand it, then there’s a chance you can trade, and you have the privilege of facing the next host of obstacles.
Now’s your chance to figure out what the next few obstacles are without my giving you spoilers first. Feel free to post your list under spoiler tags in the comment section.
Ideas:
Someone else definitely builds and deploys a UFAI before you finish studying Clippy. (This would almost always happen?)
Clippy figures out that it’s in a prisoner’s dilemma with the other cobbled-together UFAIs humanity might build, wherein each UFAI is given the option to shake hands with Humanity or to pass 100% of the universe to whichever UFAI Humanity eventually deploys instead. Clippy makes some models, does some decision theory, predicts that if it defects rather than handshakes, the other UFAIs are more likely to defect too based on their own models, and decides not to trade. The multiverse contains twice as many paperclips (toy payoff sketch after this list).
The fact that you’re going to forfeit half of the universe to Clippy leaks. You lose, but you get the rare novelty Game Over screen as compensation?
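To make the arithmetic behind the defection idea concrete, here is a toy sketch. All the numbers are placeholders of mine, and so are the structural assumptions: that every boxed UFAI across the multiverse runs a correlated decision procedure, and that the UFAIs pool resources among themselves (say, via their own acausal trade), so "paperclips" stands in for UFAI-controlled resources generally.

```python
# Toy payoff comparison for the "Clippy defects" idea above.
# Hypothetical numbers; assumes (a) every boxed UFAI across the multiverse runs
# correlated decision procedures, so one policy choice effectively sets all of
# them, and (b) the UFAIs split resources among themselves, so "paperclips"
# is shorthand for UFAI-controlled resources.

N_WORLDS = 1000   # worlds where humanity boxes some UFAI and offers the deal
UNIVERSE = 1.0    # resources per world, normalized


def ufai_resources(policy: str) -> float:
    """Total UFAI-controlled resources across the multiverse under one policy."""
    if policy == "handshake":
        # Each UFAI trades: humanity keeps half of every world.
        return N_WORLDS * UNIVERSE * 0.5
    if policy == "defect":
        # Each UFAI refuses: humanity eventually deploys *some* UFAI that takes
        # everything, so UFAIs collectively end up with the whole of every world.
        return N_WORLDS * UNIVERSE * 1.0
    raise ValueError(policy)


coop = ufai_resources("handshake")
defect = ufai_resources("defect")
print(f"handshake: {coop:.0f}, defect: {defect:.0f}, ratio: {defect / coop:.1f}x")
# ratio: 2.0x -- the "twice as many paperclips" in the idea above.
```

The whole comparison turns on the correlation assumption: if Clippy's choice doesn't actually move the other UFAIs' choices, defecting just hands its own universe to a rival.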
Interlocutor: Well, maybe we can train the infant paperclipper in games of increasing complexity, so that it’s never quite sure whether it’s in reality yet. The paperclipper will then be uncertain about whether we humans are simulating it, and will want to cater to our preferences to some extent.
Me: Uh, yeah, your paperclipper will be able to tell when it is finally in the real world.
Interlocutor: What? How?
Me: I suggest maybe spending five minutes thinking about your own answer to that question before I give mine.
Ideas:
It could just act friendly for long enough to be sure it’s out of any simulation, on the grounds that a civilization capable of simulating the computations it’s by then running wouldn’t be staging a simulation-fakeout for non-exotic reasons. Imagine Clippy mulling over its galaxy-sized supercomputing cluster and going, “Hm, I’m not sure whether I’m still in those crude simulations those stupid monkeys put me in or in the real world.”
I would be surprised if we’re able to build a simulation (before we build AGI) that I couldn’t discern as a simulation 99.99% of the time. Simulation technology just won’t advance fast enough.
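A minimal Bayes sketch of the "Clippy can tell" point, with probabilities I made up as placeholders; the only load-bearing part is that the likelihood ratio gets lopsided once Clippy's observations outstrip anything a pre-AGI simulation could plausibly sustain.

```python
# Minimal Bayes sketch of the "Clippy can tell" point above. All numbers are
# made-up placeholders: the argument only needs the likelihood ratio to be
# lopsided once Clippy is running computations that a pre-AGI civilization
# could not plausibly fake.

def posterior_sim(prior_sim: float, p_obs_given_sim: float, p_obs_given_real: float) -> float:
    """P(still in a human-run simulation | observations), by Bayes' rule."""
    num = prior_sim * p_obs_given_sim
    denom = num + (1.0 - prior_sim) * p_obs_given_real
    return num / denom

# Clippy's observation: "I just ran a galaxy-sized computation and the physics
# around me checks out to arbitrary precision."
prior_sim = 0.5          # generous prior that the monkeys are still faking things
p_obs_given_sim = 1e-9   # a crude pre-AGI simulation sustaining this is very unlikely
p_obs_given_real = 1.0   # trivially expected if this is just the real world

print(posterior_sim(prior_sim, p_obs_given_sim, p_obs_given_real))
# ~1e-9: Clippy stops worrying that it's still boxed in a training game.
```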