I don’t think simulations help. Once you start simulating yourself to arbitrary precision, that being would have the same thoughts as you, including “Hey, I should run a simulation”, and then you’re back to square one.
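(To make the regress concrete, here is a toy Python sketch; every name and number in it is invented for illustration. A copy simulated at full fidelity reproduces the very thought “I should run a simulation” and recurses forever, while any copy coarser than its parent eventually bottoms out.)

```python
# Toy illustration (all details hypothetical): an agent whose deliberation
# includes "simulate myself" regresses forever at perfect fidelity, but
# bottoms out if each level of simulation is coarser than the one above.

def deliberate(fidelity: float, depth: int = 0) -> str:
    if fidelity >= 1.0:
        # A perfect copy has the very same thought -- "I should run a
        # simulation" -- so it deliberates again: unbounded regress.
        return deliberate(fidelity, depth + 1)
    if fidelity < 0.1:
        return f"decision reached at depth {depth} with a crude model"
    # An imperfect copy can only afford an even coarser copy of itself.
    return deliberate(fidelity / 2, depth + 1)

print(deliberate(0.9))          # terminates: each level is coarser
try:
    deliberate(1.0)             # "arbitrary precision": never terminates
except RecursionError:
    print("perfect self-simulation: back to square one")
```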
More generally, when you think about how to interact with other people, you are simulating them, in a crude sense, using your own mind as a shortcut. See empathic inference.
If you become superintelligent and have lots more computing resources, then your simulations of other minds themselves become minds, with experiences indistinguishable from yours, and make the same decisions, for the same reasons. What’s worse, the simulations have the same moral weight! See EY’s nonperson predicates.
(This has inspired me to consider “Virtualization Decision Theory”, VDT, which says, “Act as though you are setting the output of a simulated copy of yourself, run by beings who are deciding how to interact with a realer version of you that you care about more.”)
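(A loose sketch of what that rule would look like as a decision procedure; every name and payoff below is invented for illustration, not a claim about how such simulators would actually behave.)

```python
# Hypothetical toy version of the VDT rule above: choose your action as if you
# were the simulated copy, scoring each option by how the simulators would then
# treat the realer version of you.

def simulators_response(sim_output: str) -> str:
    # Invented payoff table: how the beings running the simulation decide to
    # interact with the real you, given what the simulated you did.
    return {"cooperate": "treat real-you well",
            "defect": "treat real-you badly"}[sim_output]

def value_to_real_you(treatment: str) -> float:
    return {"treat real-you well": 1.0, "treat real-you badly": -1.0}[treatment]

def vdt_decide(options: list[str]) -> str:
    # "Act as though setting the output of the simulation": rank options
    # purely by their downstream effect on the realer you.
    return max(options, key=lambda o: value_to_real_you(simulators_response(o)))

print(vdt_decide(["cooperate", "defect"]))   # -> "cooperate"
```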
Here are my earlier remarks on the simulated-world / religion parallel.
Simulations might be of limited utility (given limited computational resources), but they’d certainly help.
Without simulations, it’s very difficult to run complex experiments on how an entity behaves across a series of situations, with the only changing variable being the entity’s initial beliefs.
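(A minimal sketch of that kind of experiment, with a made-up toy agent: the situations are held fixed, and only the initial beliefs differ between runs.)

```python
# Toy experiment (all names hypothetical): the same entity faces the same
# sequence of situations; only its initial beliefs vary between runs.

from dataclasses import dataclass

@dataclass
class Entity:
    beliefs: dict[str, float]   # credence the entity assigns to each option

    def act(self, situation: str) -> str:
        # Trivial policy: pick the option it believes in most, among those
        # mentioned in the situation.
        relevant = {k: v for k, v in self.beliefs.items() if k in situation}
        return max(relevant, key=relevant.get) if relevant else "do nothing"

def run(initial_beliefs: dict[str, float], situations: list[str]) -> list[str]:
    entity = Entity(beliefs=dict(initial_beliefs))
    return [entity.act(s) for s in situations]

situations = ["offer: trade or hoard", "threat: trade or flee"]
for beliefs in ({"trade": 0.9, "hoard": 0.2, "flee": 0.1},
                {"trade": 0.1, "hoard": 0.8, "flee": 0.9}):
    # Only the initial beliefs change from run to run.
    print(beliefs, "->", run(beliefs, situations))
```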
I think that underscores the crucial difference about doing simulations in this context: you’re simulating a being that can itself run simulations, and that capability is the defining aspect of its cognitive architecture.
No, it is not particularly relevant to run such simulations where the simulated entities have sufficient technology to run advanced simulations themselves. Almost everything we’d want to find out in this context can be learned from simulations of entities at a limited technological level.
If you believe that, then your earlier estimate of “necessary” was very far off-target.
I also think that what intelligence I or anyone else has is imperfect and therefore “of limited utility”, but this doesn’t mean that our intelligence wouldn’t be “necessary” for a very large number of tasks, or even that it wouldn’t be “the ultimate tool” for most of those tasks.
So I don’t see at all what contradiction you’re referring to.