Can’t this be modelled as uncertainty over functional equivalence? (or over input-output maps)?
Hm, that’s an interesting point. Is what we care about just the brute input-output map? If we’re faced with a black-box predictor, then yes, all that matters is the correlation, even if we don’t know the method. But I don’t think representing computations as input-output maps actually helps account for how we should learn about or predict this correlation; we learn about and predict the predictor in a way that looks like updating a distribution over computations. Nor does it seem to help when trying to understand to what extent two agents are logically dependent on one another. So I think the computational representation is going to be more fruitful.
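A minimal sketch of the contrast, using an assumed toy setup (the candidate programs, the inputs, and the observed outputs are all made up for illustration): conditioning a prior over *computations* on a few observed input-output pairs constrains what the predictor does on unseen inputs, whereas conditioning a prior over bare input-output maps does not.

```python
from itertools import product

# Toy hypothesis class 1 (assumed for illustration): a few short candidate
# computations the black-box predictor might be running.
candidate_programs = {
    "parity":   lambda x: x % 2,
    "mod3":     lambda x: 1 if x % 3 == 0 else 0,
    "constant": lambda x: 1,
}

# Toy hypothesis class 2: every bare input-output map on these inputs,
# with no structure relating one input's output to another's.
inputs = [1, 2, 3, 4, 5, 6]
all_io_maps = [dict(zip(inputs, bits)) for bits in product([0, 1], repeat=len(inputs))]

# A few observed outputs of the black-box predictor (arbitrary toy data).
observations = {2: 0, 3: 1, 5: 1}

def condition(hypotheses, evaluate):
    """Uniform prior over `hypotheses`, conditioned on the observations."""
    return [h for h in hypotheses
            if all(evaluate(h, x) == y for x, y in observations.items())]

surviving_programs = condition(list(candidate_programs.items()), lambda h, x: h[1](x))
surviving_maps = condition(all_io_maps, lambda h, x: h[x])

# Program-level posterior: only "parity" is consistent with the data, so it
# makes a definite prediction (0) for the unseen input 6.
print("consistent programs:", [name for name, _ in surviving_programs])
print("their predictions at 6:", {name: f(6) for name, f in surviving_programs})

# Map-level posterior: the observations say nothing about unseen inputs;
# exactly half the surviving maps output 1 at input 6.
outputs_at_6 = [m[6] for m in surviving_maps]
print("fraction of consistent maps outputting 1 at 6:",
      sum(outputs_at_6) / len(outputs_at_6))
```

The particular programs and observations are arbitrary; the point is only that program-level hypotheses carry structure across inputs that map-level hypotheses lack, which is what makes the computational representation useful for learning and prediction.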
I suggest you check with Nate what exactly he thinks, but my opinion is:
I think Nate agrees with this, and any lack of functional equivalence is due to not yet being able to fully specify it.