Agree with your rot13. I guess it mostly just seemed related enough to be worth mentioning.
What are your philosophical quibbles with TDT, if I may ask?
A bunch of inferences arising from the following statement: "The supposition that an idealized rational agent's mind interacts with the universe in any way other than via the actions it chooses to carry out contains logical paradoxes."
I'm not confident in the opinion; it just represents my current state of understanding. When I've fleshed it out better in my head I will write it up and display it for criticism, unless I realize it is wrong in the intervening time (which is quite likely). One potential consequence is that TDT might ultimately be impossible to fully formalize without paradox via self-reference. The conclusion is that CDT is correct, as long as you follow the no-mind-reading rule. I reconstruct Newcomb's problem and similar problems so that the setup is essentially the same but no one is reading the agent's mind, and I seem to always arrive at winning answers.
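To make the no-mind-reading reconstruction concrete, here is a minimal sketch of a Newcomb-style payoff calculation. The predictor's accuracy is assumed to come from observing the agent's past behavior rather than from reading its mind; the 0.99 accuracy and the dollar amounts are the standard illustrative numbers, not anything from the argument above.

```python
def expected_value(action: str, accuracy: float = 0.99) -> float:
    """Expected payoff in a Newcomb-style problem where the predictor
    is right with the given accuracy, based only on observed behavior
    (no mind-reading). Payoffs: $1,000,000 opaque box, $1,000 visible box.
    """
    if action == "one-box":
        # Predictor usually foresaw one-boxing, so the opaque box is full.
        return accuracy * 1_000_000
    if action == "two-box":
        # Predictor usually foresaw two-boxing, so the opaque box is empty.
        return accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)
    raise ValueError(f"unknown action: {action}")

print(expected_value("one-box"))   # roughly 990,000
print(expected_value("two-box"))   # roughly 11,000
```

Under these assumptions one-boxing dominates in expectation, which is the sense in which such reconstructions "arrive at winning answers" without any appeal to the predictor accessing the agent's internal state.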