Simulations might be of limited utility (given limited computational resources), but they’d certainly help.
Without simulations, it’s very difficult to run complex experiments on how an entity behaves across a series of situations while varying only one thing: the entity’s initial beliefs.
I think that underscores the crucial point about doing simulations in this context: you’re simulating a being that can itself run simulations, and that capacity is the defining aspect of the being’s cognitive architecture.
No, it is not particularly relevant to run such simulations in which the entities have sufficient technology to run advanced simulations themselves. Almost everything we’d want to find out in this context can be learned from simulations of entities at a limited technological level.
If you believe that, then your earlier estimate of “necessary” was very far off-target.
I also think that what intelligence I or anyone else has is imperfect and therefore “of limited utility”, but this doesn’t mean that our intelligence wouldn’t be “necessary” for a very large number of tasks, or even that it wouldn’t be “the ultimate tool” for most of those tasks.
So I don’t see at all what contradiction you’re referring to.