It’s really weird that we find ourselves at the hinge of history. One proposed explanation is that we’re part of an ancestor simulation. It makes sense that ancestor simulations would be focused on the hinge of history. But unless ancestor simulations make up a significant proportion of future minds, it’s still weird that we find ourselves in a simulation rather than actually experiencing the future.
Why might ancestor simulations make up a significant proportion of future minds? One possible answer is that ancestor simulations provide the information required for acausal cooperation across large worlds (known as ECL). If knowing the values that civilizations developed after the hinge of history allowed you to trade with them, then civilizations should focus a significant proportion of their resources on simulating the hinges of history experienced by many other civilizations.
I presume that this explanation has been proposed before, and probably in more detail; links appreciated if so.
I’ve done work in this area, but never been particularly enthusiastic about promoting it. It usually turns out to be inactionable/grim/likely to rouse a panic.
This is a familiar thought, to me.
A counterargument occurs to me: Isn’t it arguable that most of what we need to know about a species, to trade with it, is just downstream of its biology? Of course we talk a lot about our contingent factors, our culture, our history, but I think we’re pretty much just the same animals we’ve always been, extrapolated. If that’s the case, wouldn’t far more simulation time be given to evolutionary histories, rather than to simulating variations of hinges? Anthropic measure wouldn’t be especially concentrated on the hinge; it might even skip it.
Countercounterargument: it also seems like there are a lot of anti-inductive effects in the histories of technological societies that might mean you really do have to simulate it all to find out how values settle or just to figure out the species’ rate of success. Evolutionary histories might also have a lot more computationally compressible shared structure.
I’d be surprised if this, the world in front of us, were a Pareto-efficient bargaining outcome. Hinge histories fucking suck to live in and I would strongly prefer a trade protocol that instantiated as few of them as possible. I wouldn’t expect many to be necessary, certainly not enough to significantly outweigh the… thing that is supposed to come after. (At this point, I’d prefer to take it into DMs/a call.)
Thinking about this stuff again, something occurred to me. Please make sure to keep, in cold storage, copies of misaligned AGIs that you may produce, when you catch them. It’s important. This policy could save us.
Would you care to expand on your remark? I don’t see how it follows from what you said above it.
Yeah, it wasn’t argued. I wasn’t sure whether it needed to be explained, for Richard. I don’t remember how I wound up getting there from the rest of the comment, I think it was just in the same broad neighborhood.
Regardless, yes, I totally can expand on that. Here, I wrote it up: Do Not Delete your Misaligned AGI.
World champion in Chess: “It’s really weird that I’m world champion. It must be a simulation or I must dream or..”
Joe Biden: “It’s really weird I’m president, it must be a simul...”
(Donald Trump: “It really really makes no sense I’m president, it MUST be a s..”)
David Chalmers: “It’s really weird I’m providing the seminal hard problem formulation. It must be a sim..”
...
Rationalist (before finding lesswrong): “Gosh, all these people around me, really wired differently than I am. I must be in a simulation.”
Something seems funny to me in the anthropic reasoning in these examples, and in yours too.
Of course there is exactly one world champion in chess (or in anything), so a line of reasoning that leads the world champion, quasi by definition, to question his own champion-ness seems odd. Then again, I’d be lying if I claimed I couldn’t intuitively empathize with his wondering about the odds of exactly him being the world champion among 9 billion people.
This leads me to the following, which more or less satisfies me:
Hypothetically, imagine each generation has only one person, and there is rebirth: the same person is reborn, generation after generation.
With some simplification:
For 10,000 generations you live in stone-age conditions.
For 1 generation (today) you’re the hinge-of-history generation.
Then X (X being: you don’t live at all anymore because AI killed everything; or you live a million generations happily, served by AI; or what have you).
The 10,000 stone-age yous didn’t have much reason to wonder about the hinge of history, and so never happened to think about it. The one you in the hinge-of-history generation, by definition, has every reason to think about the hinge of history, and does think about it.
So it becomes a bit like a lottery that you repeat until you naturally, at some point, draw the winning number. At that lucky draw, there’s no reason to think “Unlikely, it’s probably a simulation”, or anything of the sort.
I have the impression that, in a similar way, the reincarnated person should not wonder about it, not even when his memory is wiped each time, and in the same vein (hm, am I being sloppy here? that’s the hinge of my argument) neither do you have to wonder too much.
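One way to make that lottery intuition explicit (a minimal sketch; the uniform prior over generations and the generation count are just the toy numbers from above, nothing forced by the setup): only the hinge-generation you ever has the thought “why am I at the hinge?”, so conditioning on having that thought gives

\[
P(g = \text{hinge} \mid \text{wondering}) = \frac{1 \cdot \frac{1}{10{,}001}}{1 \cdot \frac{1}{10{,}001} + 10{,}000 \cdot 0 \cdot \frac{1}{10{,}001}} = 1.
\]

And since every run of this toy history, simulated or not, contains exactly one such moment, the likelihood ratio \(P(\text{wondering} \mid \text{sim}) / P(\text{wondering} \mid \text{base}) = 1\): within this model, the observation carries no update toward simulation.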
Nit: ECL is just one of several kinds of acausal cooperation across large worlds.
What are the others?
In general I don’t think anthropic reasoning like this holds any substance. We experience what we experience, and condition on that in forming models about what it is and where we are in it.
We don’t get to make millions of bits of observations about being a human in a technological society, use those observations to extrapolate the possibility of supergalactic multitudes of consciousness, and then express surprise at a pathetic few dozen bits of improbability of not being one of those multitudes. We already used those bits (and a great many more!) in forming our model in the first place.
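To put rough numbers on the “few dozen bits” (illustrative figures of my own; nothing in the argument depends on them): suppose the extrapolated model predicts on the order of \(10^{20}\) future minds for every \(10^{11}\) hinge-era humans. The anthropic “surprise” of being hinge-era is then

\[
\log_2\!\left(\frac{10^{20}}{10^{11}}\right) = \log_2\!\left(10^{9}\right) \approx 30 \text{ bits},
\]

which is tiny next to the millions of bits of observation that went into building the model in the first place.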