BTW, an observation: if I want to maximize the distance at which a thrown stone lands, assuming constant initial speed and zero launch height, I work out the algebra. I have an unknown x, the launch angle; I have laws of physics that express distance as a function of x; and I find the best x. In Newcomb's problem, x = my choice, I have been given the rules of the world, whereby the payoff formula includes x itself, and I calculate the best x, which is one-boxing (not surprisingly). The smoking lesion works fine too. Once you stop invoking your built-in decision theory on confusing cases, things are plain and clear.
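For concreteness, here is the standard range calculation, as a sketch (v_0 is the initial speed, g the gravitational acceleration, x the launch angle, flat ground assumed):

```latex
R(x) = \frac{v_0^2}{g}\,\sin(2x),
\qquad
\frac{dR}{dx} = \frac{2v_0^2}{g}\,\cos(2x) = 0
\;\Longrightarrow\;
x = \frac{\pi}{4}
```

The best angle is 45 degrees, and at no point did I need to ask which decision theory I subscribe to.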
At this point, how well you perform depends on what sort of axiom system you are using to solve for x, and by Gödel's theorem there will be some problem that is going to get you, i.e. cause a failure.
This doesn't seem like something that needs to be solvable. You can use diagonalization to defeat any decision theory: just award some utility iff the agent chooses the option not recommended by that decision theory. A different decision theory can choose the other option, but the diagonalized theory itself has acausal influence over which answer is right, and that prevents it from winning.
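A minimal sketch of the construction, in Python (the names and the two-option setup are illustrative, not from any formal treatment):

```python
# A "decision theory" here is just a function from a problem to one of two options.

def diagonal_payoff(target_theory, problem, chosen_option):
    """Award 1 utility iff the agent picks the option the target theory
    does NOT recommend on this problem; otherwise award 0."""
    return 1 if chosen_option != target_theory(problem) else 0

def theory_a(problem):
    return "A"  # stand-in for any fixed, formally specified rule

def theory_b(problem):
    return "B"  # a rival that happens to pick the other option

problem = "diagonal problem aimed at theory_a"

# theory_a can never win its own diagonal problem: its recommendation is
# exactly what determines which option goes unrewarded.
print(diagonal_payoff(theory_a, problem, theory_a(problem)))  # 0
# theory_b wins this one, but the same trick aimed at theory_b defeats theory_b.
print(diagonal_payoff(theory_a, problem, theory_b(problem)))  # 1
```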
Yep. Just wanted to mention that every theory you can diagonalize against, i.e. every formal one, can be defeated.
My point is that one could just make the choice be x, express the payoff in terms of x, and then solve for the x that gives the maximum payoff, using the methods of algebra, instead of redefining algebra in some stupid sense of iterating over values of x until an equality is found (then omg it fails at x = x), and reinventing already existing reasoning (in the form of theorem proving).
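Here is that move spelled out for Newcomb's, as a sketch assuming a perfect predictor and the usual $1,000,000 / $1,000 amounts:

```python
# Payoff as a function of the choice x. Because the predictor is perfect,
# the opaque box's contents are themselves a function of x.

def payoff(x):
    opaque = 1_000_000 if x == "one-box" else 0  # filled iff one-boxing was predicted
    transparent = 1_000
    return opaque if x == "one-box" else opaque + transparent

# Solve for the best x exactly as in the launch-angle problem.
best = max(["one-box", "two-box"], key=payoff)
print(best, payoff(best))  # one-box 1000000
```

No iterating until a fixed point, no special machinery: write down the payoff formula the rules give you, including x's appearance inside it, and maximize.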
Still not quite it.