In the broadest sense, the hypothesis is somewhat trivial.
No, I don’t think so.
For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the “right” exchange, such that it is indistinguishable from a human.
Are you making Searle’s Chinese Room argument?
In any case, even if we accept the purely functional approach, it doesn’t seem obvious to me that you must be able to create a simulation that picks the “right” answer ahead of time. You don’t get to run 2^n instances and say “Pick whichever one satisfies your criteria”.
Well, I did say “In the broadest sense”, so yes, that does imply a purely functional approach.
You don’t get to run 2^n instances and say “Pick whichever one satisfies your criteria”.
The claim was that it is possible in principle. And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.
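For concreteness, here is a minimal Python sketch of what “run 2^n instances and pick the one that satisfies the criteria” amounts to. The predicate passes_as_human and the fixed target transcript are hypothetical stand-ins for the judge’s criteria, not anything specified in the discussion above.

```python
from itertools import product

# Toy channel: n bits of capacity means exactly 2**n distinct transcripts.
n = 8
all_exchanges = list(product([0, 1], repeat=n))
assert len(all_exchanges) == 2 ** n

def pick_right_exchange(passes_as_human):
    """Brute force: try every possible transcript and return the first one
    the judge accepts. passes_as_human is a hypothetical predicate standing
    in for the judge's criteria."""
    return next(e for e in all_exchanges if passes_as_human(e))

# Example with a stand-in judge that accepts one fixed target transcript.
target = tuple(int(b) for b in "10110010")
print(pick_right_exchange(lambda e: e == target))
```

Note that the search space doubles with every added bit of channel capacity, which is exactly the point raised next.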
That’s not simulating intelligence. That’s just a crude exhaustive search.
And I am not sure there is enough energy in the universe to run 2^n instances, anyway.
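As a rough check on that energy point: assuming each of the 2^n instances costs at least one bit operation at the Landauer limit (kT ln 2 joules per bit at room temperature), and taking ~10^70 J as an order-of-magnitude figure for the mass-energy of ordinary matter in the observable universe, the budget is exhausted near n ≈ 300, while a real conversation easily runs to many thousands of bits.

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # ~2.9e-21 J, minimum energy per bit operation

# Rough mass-energy of ordinary matter in the observable universe:
# ~1.5e53 kg of matter, E = m * c**2 gives ~1.4e70 J (order of magnitude only).
E_universe = 1.5e53 * (3.0e8) ** 2

# Largest n for which 2**n single-bit operations fit in that energy budget.
n_max = math.log2(E_universe / landauer)
print(f"n_max ≈ {n_max:.0f}")       # ≈ 301, far below any real conversation
```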