Can’t this be modelled as uncertainty over functional equivalence (or over input-output maps)?
Hm, that’s an interesting point. Is what we care about just the brute input-output map? If we’re faced with a black-box predictor, then yes, all that matters is the correlation, even if we don’t know the method. But I don’t think representing computations as input-output maps actually helps account for how we should learn about or predict that correlation: we learn about and predict the predictor in a way that looks like updating a distribution over computations. Nor does it seem to help when we try to understand to what extent two agents are logically dependent on one another. So I think the computational representation is going to be more fruitful.
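To make the “updating a distribution over computations” picture concrete, here is a minimal toy sketch (my own illustration, not something established in this thread): a prior over a small, hypothetical class of candidate computations, updated on the observed input-output behaviour of a black-box predictor. The hypothesis names, the noise rate, and the example data are all assumptions made up for illustration.

```python
# Toy sketch (illustrative assumptions only): candidate "computations" are
# simple Python functions, and we update a posterior over them from observed
# input-output behaviour of a black-box predictor.

# Hypothetical hypothesis class of computations the predictor might be running.
hypotheses = {
    "parity":    lambda x: x % 2,
    "threshold": lambda x: int(x > 5),
    "constant":  lambda x: 1,
}

# Uniform prior over computations.
posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}

def update(posterior, observations, noise=0.05):
    """Bayesian update: each hypothesis predicts deterministically, with a
    small noise rate so no hypothesis is ruled out by a single mismatch."""
    new = {}
    for name, prob in posterior.items():
        f = hypotheses[name]
        likelihood = 1.0
        for x, y in observations:
            likelihood *= (1 - noise) if f(x) == y else noise
        new[name] = prob * likelihood
    z = sum(new.values())
    return {name: p / z for name, p in new.items()}

# Suppose the black-box predictor actually computes parity.
observations = [(x, x % 2) for x in [1, 2, 3, 4, 7]]
posterior = update(posterior, observations)
print(posterior)  # probability mass concentrates on "parity"

# The posterior over *computations* yields predictions on inputs we have never
# observed; a bare table of the seen input-output pairs would not.
print({name: hypotheses[name](10) for name in hypotheses})
```

The point of the sketch is just that the posterior over computations generalizes to unseen inputs, which a brute table of observed input-output pairs cannot do on its own.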