Interesting!
I would say that you (as a real human in the present time) are uncertain about your source code in the traditional sense of the word “uncertain”. Once we have brain scans and ems and such, if you get scanned and have access to the scan, you’re probably uncertain in something more like a logical-uncertainty sense: you have access to it, and the ability to answer some questions with it, but you don’t “know” everything that is implied by that knowledge.
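To make the “access vs. knowing what it implies” distinction concrete, here’s a minimal sketch in Python (a hypothetical toy of my own, nothing to do with actual scans or ems): the full source of the function is sitting right there to read, but the value of `collatz_steps(27)` is only “known” once the computation is actually carried out.

```python
import inspect


def collatz_steps(n: int) -> int:
    """Count Collatz iterations (n -> n/2 or 3n+1) until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps


# "Access": the complete source code is available for inspection.
print(inspect.getsource(collatz_steps))

# "Logical uncertainty": collatz_steps(27) is fully determined by the source
# above, but we don't *know* its value until we actually run the computation.
print(collatz_steps(27))  # 111
```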
Indexical uncertainty can apply to a perfect Bayesian reasoner. (Right? I mean, given that those can’t exist in the real world,...) Whereas the uncertainty we’re talking about here seems to come from computational limits, so it doesn’t feel like it’s indexical.
Does it make sense to talk about a “computationally limited but otherwise perfect Bayesian reasoner”? Because such a reasoner can exhibit logical uncertainty, but I don’t think it exhibits source code uncertainty in the sense that you do, namely having trouble predicting your own future actions or running yourself in simulation.
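On the “running yourself in simulation” part, here’s a toy sketch (again hypothetical, my own illustration) of why a computationally limited agent has trouble: a faithful self-simulation contains the step where the simulated agent simulates itself, so the regress only ends when the compute budget runs out, and what’s left is a guess rather than a prediction.

```python
def predict_own_action(depth: int = 0, max_depth: int = 100) -> str:
    """Toy agent that tries to predict its next action by simulating itself."""
    if depth >= max_depth:
        # Compute budget exhausted before the simulation bottomed out:
        # fall back to a guess rather than a genuine prediction.
        return "guess"
    # A faithful simulation of this agent includes the step where the simulated
    # agent simulates *itself* -- hence the regress.
    return predict_own_action(depth + 1, max_depth)


print(predict_own_action())  # always "guess"; the self-simulation never completes
```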