Yes. More generally, when talking about “lack of X” as a design constraint, “inability to trivially create X from scratch” is assumed.
I try not to make general assumptions that would render the entire counterfactual in question untenable or ridiculous; this verges on such an instance. Making Bayesian inferences about observable features of the environment is one of the most basic capabilities we can expect of a functioning agent.
Note the “trivially.” An AI with unlimited computational resources and the ability to run experiments could eventually figure out how humans think. The question is how long it would take, how obvious the experiments would be, and how much it already knew.