But hold on: if you truly do start from an untainted Occamian prior, you have to rule out a great many possible universes before you get to this one. In short, we don’t actually want truly general intelligence. Rather, we want intelligence with a strong prior tilted toward the workings of this universe.
Sure, we want to bias the machine quite strongly towards hypotheses that we believe. This would make the job of the SI easier.
Very true—but only if you can represent your knowledge in a form conducive to the SI’s Bayesian updating. At that point, however, you run into the problem of telling your SI knowledge that it couldn’t have generated for itself.
Suppose it finds it has a high prior on the equations the Cornell team derived. But, for some reason, those equations seem inapplicable to most featherless bipeds, or even feathered ones! So it wants to go back and identify the data that would have amplified the odds it assigned to those equations. Would it know to seek out heavy, double-pinned devices and track the linkages’ x and y positions?
Would it know when the equations even apply? Or would the prior just needlessly taint any future inferences about phenomena too many levels above Newtonian mechanics (e.g. social psychology)?
Good point. That’s why you don’t want to go overboard with priors. However, even human psychology has underlying statistical laws governing it.
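The danger of going overboard can be made concrete with a toy Bayesian update. The scenario below is purely illustrative, not from the dialogue: two point hypotheses about a coin's heads-rate, where the prior is heavily tilted toward the wrong one. Even data that strongly favors the other hypothesis fails to dislodge the tainted prior, while a flat prior switches immediately.

```python
import math

def posterior(prior_h1, heads, tails, p1=0.5, p2=0.8):
    """P(H1 | data) for two point hypotheses about a coin:
    H1 says P(heads) = p1, H2 says P(heads) = p2."""
    # Log-likelihood of the observed flips under each hypothesis.
    log_l1 = heads * math.log(p1) + tails * math.log(1 - p1)
    log_l2 = heads * math.log(p2) + tails * math.log(1 - p2)
    # Posterior odds = prior odds * likelihood ratio, in log space.
    log_odds = math.log(prior_h1 / (1 - prior_h1)) + (log_l1 - log_l2)
    return 1 / (1 + math.exp(-log_odds))

# 16 heads in 20 flips: the data matches H2 (p = 0.8), not H1 (p = 0.5).
tainted = posterior(0.999, heads=16, tails=4)  # overboard prior on H1
flat = posterior(0.5, heads=16, tails=4)       # uninformative prior

print(round(tainted, 3))  # still well above 0.9: the prior clings to H1
print(round(flat, 3))     # below 0.05: the flat prior has switched to H2
```

The asymmetry is the point of the exchange above: a well-chosen strong prior saves the SI evidence-gathering, but the same strength becomes a liability the moment the hypothesis stops applying.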