We can’t simulate things we currently have no understanding of. But if, at some point in the future, we know how to write AGIs, then we will be able to simulate them; and if we never learn how to write AGIs, then they will never exist. So if we can write AGIs in the future, memory capacity and processor speed won’t impose a limit on our understanding of them. Any such limit would have to come from some other factor. So is there such a limit, and where would it come from?
I think you’re missing what the goal of all this is. LessWrong contains a lot of reasoning and prediction about AIs that don’t exist, with details not filled in, because we want to decide which AI research paths we should and shouldn’t pursue, which AIs we should and shouldn’t create, and so on. This kind of strategic thinking is necessarily forward-looking and based on incomplete information, because if it weren’t, it would come too late to be useful.
So yes, once AGIs are already coded up and ready to run, we can learn things about their behavior by running them. That isn’t in dispute; it’s just not a solution to the questions we want to answer, on the timescales we need the answers.