The reason is that creators presumably want the former but not the latter, which is why they’d be running a simulation in the first place.
The fact that the humans in the simulation would prefer to be spared doesn’t say anything about the intentions of the simulation’s creators. For all the AI knows, it could have been created by a different AI and be under test for capability rather than for the human notion of “ethics”.
Why else would the creator of the simulation bother simulating humans creating the ASI?
Because they wanted to see how well the AI manages to achieve its goals in these specific circumstances, for example.
But the actual answer is: for literally any reason. You are talking about probabilities on the order of 4.54e-10. Surely all the possible alternative reasons combined carry more probability than that.
Sure. But I think you’re reading my argument as stronger than I mean it to be, which is partly my fault since I made my previous replies a bit too short, and for that I apologize.
What I’m doing here is presenting one particular simulation scenario that (to me) seems quite plausible within the realm of simulations. I’m not claiming that that one scenario dominates all others combined. But luckily that stronger claim is not necessary to argue against Eliezer’s point: the weaker one suffices. Indeed, if the scenario I’m presenting is more than 4.5e-10 likely (and I do think it’s much more likely than that, probably by a few orders of magnitude), then that is more than enough to outweigh the practical cost to the ASI of building a Dyson shell with a hole on the order of 4.5e-10 of its surface area.
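The expected-value comparison underlying that claim can be made explicit. A minimal sketch in Python, where `p_sim` is a hypothetical probability assigned to a sparing-rewarding simulation scenario (the specific value here is an assumed illustration of “a few orders of magnitude above 4.5e-10”, not a figure from the thread), and the cost is the fraction of the Dyson shell’s output sacrificed by leaving a hole for Earth:

```python
# Toy expected-value check: sparing humanity is worthwhile for the ASI
# whenever the probability-weighted payoff of the simulation scenario
# exceeds the fractional resource cost of the hole in the Dyson shell.

p_sim = 4.5e-8        # assumed: probability of the sparing-rewarding scenario
cost = 4.54e-10       # fraction of solar output lost to the hole (from the thread)
payoff_if_sim = 1.0   # normalized value to the ASI of passing the creators' test

expected_gain = p_sim * payoff_if_sim
spare_is_worth_it = expected_gain > cost
print(spare_is_worth_it)
```

Under these assumed numbers the expected gain exceeds the cost by roughly two orders of magnitude, which is all the weaker claim needs.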
Now, that scenario is (I claim) the most likely one, conditional of course on a simulation taking place to begin with. The other candidate simulation scenarios are varied, and none of them seems particularly likely, though combined they might well outweigh this one in terms of probability mass, as I already acknowledged. But so what? Are you really claiming that the distribution of those other simulation scenarios is skewed enough to tip the scales back to the doom side? It might be, but that’s a much harder argument to make. I’m approximately completely unsure, which seems far better than the 99%+ chance Eliezer seems to give to total doom. So I’d count that as good news.