(I appreciate object-level engagement in general, but this seems combatively worded.) (edit: I don’t think this or the original shortform deserved negative karma; that seems malicious/LW-norm-violating.)
The rest of this reply responds to arguments.
> Why should the Earth superintelligence care about you, but not about the other 10^10^30 other causally independent ASIs that are latent in the hypothesis space, each capable of running enormous numbers of copies of the Earth ASI in various scenarios?
The example talks of a single ASI as a toy scenario to introduce the central idea.
The reader can extrapolate that one ASI’s actions won’t be relevant if other ASIs create a greater number of copies.
This is a simple extrapolation, but would be difficult for me to word into the post from the start.
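To make that extrapolation concrete, here is a toy sketch (my own illustration, not from the post; the names and copy counts are invented): a copy reasoning about which ASI is simulating it might weight each candidate simulator in proportion to how many copies that simulator runs, so a simulator running many more copies dominates the weighting.

```python
# Hypothetical copy counts for two candidate simulators.
copies_run = {"asi_a": 1, "asi_b": 9}

# Weight each candidate simulator by its share of the total copies:
# under a uniform prior over copies, this is the chance of being one of its copies.
total = sum(copies_run.values())
weights = {name: n / total for name, n in copies_run.items()}

# asi_b runs 9x as many copies, so it dominates the weighting.
print(weights)  # {'asi_a': 0.1, 'asi_b': 0.9}
```

On these toy numbers, the single-ASI story from the example contributes only a small share of the weight once a heavier simulator enters the picture, which is the sense in which one ASI's actions stop being relevant.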
It sounds like you think it would be infeasible/take too much compute for an ASI to estimate the distribution of entities simulating it, given the vast amount of possible entities. I have some probability on that being the case, but most probability on there being reasons for the estimation to be feasible:
e.g., if there’s some set of common alignment failure modes that occur across civilizations, which tend to produce clusters of ASIs with similar values, and it ends up being the case that these clusters make up the majority of ASIs.
Or if there’s a Schelling point for what value function to give the simulated copies, which many ASIs with different values would use precisely to make the estimation easy. E.g., a value function which results in an ASI being created locally which then gathers more compute, uses it to estimate the distribution of ASIs which engaged in this scheme, and then maximizes the mix of their values.
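As a minimal sketch of what "maximizes the mix of their values" could mean (my own toy illustration; the participant weights and the paperclip/staple utility functions are invented placeholders for whatever values the participating ASIs actually have): the Schelling-point value function scores each outcome by the weighted sum of the participants' utilities under the estimated distribution, then picks the best-scoring outcome.

```python
# Estimated distribution over participating ASIs: (weight, utility function).
# Weights and utilities are hypothetical stand-ins.
participants = [
    (0.7, lambda outcome: outcome["paperclips"]),
    (0.3, lambda outcome: outcome["staples"]),
]

def mixture_value(outcome):
    # Expected utility of an outcome under the estimated distribution.
    return sum(w * u(outcome) for w, u in participants)

candidate_outcomes = [
    {"paperclips": 10, "staples": 0},   # mixture value: 0.7*10 = 7.0
    {"paperclips": 0, "staples": 10},   # mixture value: 0.3*10 = 3.0
    {"paperclips": 6, "staples": 6},    # mixture value: 0.7*6 + 0.3*6 = 6.0
]
best = max(candidate_outcomes, key=mixture_value)
print(best)  # {'paperclips': 10, 'staples': 0}
```

The point of the coordination is only that the *function* is common knowledge; with these particular weights the mix happens to favor the heavier participant, and a more even distribution would favor the compromise outcome instead.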
(I feel confident (>90%) that a single reachable-universe’s worth of compute is enough to do the estimation, for reasons that are less well formed; one generating intuition is that I can already reason a little bit about the distribution of superintelligences, as I have here, with the comparatively tiny amount of compute that is me.)
On your second paragraph: See the last bullet point in the original post, which describes a system roughly matching what you’ve asserted as necessary, and in general see the emphasis that this attack would not work against all systems. I’m uncertain which of the two classes (vulnerable and not vulnerable) is more likely to arise. It could definitely be the case that the vulnerable class is rare or almost never arises in practice.
But I don’t think it’s as simple as you’ve framed it, where the described scenario is impossible simply because a value function has been hardcoded in. The point was largely to show that a system which appears to only maximize the function you hardcoded into it could actually do something else in a particular case, even though the function was indeed manually entered by you.