I described some problems with Tallinn’s attempt here—under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.
We seem pretty damn close to me! A decade or so is not very long.
(Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)
In a full binary tree (for example), the internal nodes and the leaves are roughly equal in number.
Remember that in Tallinn’s analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically they want to explore lots of alternate histories, and these grow exponentially). I suppose Tallinn’s model could be adjusted so that they only explore “branch-points” in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.
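To put a rough number on that last point, here is a minimal sketch, assuming purely for illustration that the number of concurrent histories doubles at each branch-point (a figure Tallinn does not specify):

```python
def fraction_in_last(j, n, b):
    """Fraction of all simulated history-intervals that fall within the last
    j of n pre-singularity intervals, if the number of concurrent simulated
    histories during interval k is b**k (a geometric series)."""
    total = sum(b**k for k in range(n))
    tail = sum(b**k for k in range(n - j, n))
    return tail / total

# Doubling at each branch-point (b = 2), with 100 branch-points before the
# singularity; the exact values of b and n barely matter once they are large.
for j in (1, 5, 10):
    print(j, fraction_in_last(j, 100, 2))   # ~0.50, ~0.97, ~0.999
```

Under that assumption roughly half of all simulated intervals are the final one, and over 99% fall within the last ten; if the interval is a year, almost every simulated observer is within a decade of the singularity, and if it is a second, within ten seconds of it.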
On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn’s and Bostrom’s analysis, m is very much bigger than 2.
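Counting nodes directly makes the 1/m figure, and the binary-tree special case, concrete. A minimal sketch, assuming a uniform tree in which every simulating civilization runs exactly m sims:

```python
def internal_fraction(m, depth):
    """Fraction of nodes that are internal (civilizations that eventually run
    sims) in a uniform m-ary tree of the given depth."""
    internal = sum(m**k for k in range(depth))   # levels 0 .. depth-1
    leaves = m**depth                            # the bottom level
    return internal / (internal + leaves)

print(internal_fraction(2, 20))      # ~0.5   (the binary-tree intuition)
print(internal_fraction(1000, 5))    # ~0.001 (roughly 1/m once m is large)
```

So the binary tree is the one case where internal nodes and leaves balance; for the values of m Tallinn and Bostrom have in mind, almost every node is a leaf.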
More likely, rather than branching every second, there is a range of historical “tipping points” that they might want to explore, perhaps including the invention of language and the origin of humans.
Surely the chance of being in a simulated world depends somewhat on its size, and so does the chance of a sim running simulations of its own. A large world might have a high chance of running simulations, while a small world might have a low chance. Averaging over worlds of such very different sizes seems pretty useless, and any per-world average of the number of simulations run would probably be low anyway, since so many sims would be leaf nodes and so would run no simulations themselves. Leaves might be more numerous, but they would also be smaller, and less likely to contain many observers.
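Here is the same count weighted by observers rather than by worlds (all the sizes below are invented purely for illustration): even when leaves outnumber their parent a million to one, most observers can still sit in the big world.

```python
# Invented illustration: one full-sized parent world and m much smaller leaf sims.
m = 10**6                       # sims run by the parent (assumed)
observers_in_parent = 10**10    # observers in the full-sized world (assumed)
observers_per_leaf = 10**3      # observers in each small leaf sim (assumed)

leaf_observers = m * observers_per_leaf
total_observers = observers_in_parent + leaf_observers

# Chance that a randomly chosen *observer* is in a leaf:
print(leaf_observers / total_observers)   # ~0.09
```

Whether the real sizes look anything like this is exactly what a per-world average of simulations run glosses over.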
What substrate are they running these simulations on?
I had another look at Tallinn’s presentation, and he seems rather vague on this; it is, of course, hard to know what computing designs superintelligences would come up with! Presumably, though, they would use quantum computers to maximize the number of simulations they could create, which is how they could get branch-points every simulated second (or even more rapidly). Bostrom’s original simulation argument provides some lower bounds, and references, on what could be done using classical computation alone.