Basically, all of the decision theories just deduce payoffs and take an argmax; the subtle complication is in how the payoffs get deduced. I'm almost done with the post that explains it.
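To make the "deduce payoffs, take argmax" skeleton concrete, here's a minimal sketch (the function names and toy payoffs are mine, purely for illustration; the theories differ only in what goes inside the payoff-deducer):

```python
# Minimal sketch: the shared skeleton of the decision theories.
# They differ only in how deduce_payoff is computed.
def decide(actions, deduce_payoff):
    # deduce_payoff(a) -> deduced utility of taking action a
    return max(actions, key=deduce_payoff)

# Toy example with trivially known payoffs.
payoffs = {"one-box": 1_000_000, "two-box": 1_000}
choice = decide(payoffs.keys(), payoffs.get)  # -> "one-box"
```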
Well, you guys, instead of using x for the choice and doing algebra to handle x on both sides of the equations, go meta and consider yourselves inside simulators, which, albeit intellectually stimulating, is unnecessary and makes it hard for you to think straight.
If I needed to calculate the ideal orientation of a gun, assuming the enemy can predict that orientation perfectly, I'd just use x for the orientation and solve the system covering both the ballistics and the enemy's evasive action.
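Here's a sketch of what I mean, with made-up linear "physics" just to show the algebra (the symbols and the evasion model are invented for illustration):

```python
# Minimal sketch (sympy; the physics here is made up for illustration):
# write the choice as a symbol x, let the enemy's reaction depend on x,
# and solve the resulting equation. No simulator-talk needed.
from sympy import symbols, solve, Eq

x, b, p0, k = symbols("x b p0 k")

impact = b * x       # where the shot lands, as a function of aim x
enemy = p0 + k * x   # where the enemy ends up, having predicted x

# Aim so that the shot lands where the perfectly-predicting enemy will be.
print(solve(Eq(impact, enemy), x))  # solution: x = p0/(b - k)
```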
Also, Newcomb's now sounds to me like a simple case of alternative English-to-math conversions when processing the problem statement, not even a case of calculating anything differently. There's the prediction, but there's also the box contents being constant; you can't put both into the math. You can in English, but human languages are screwy and we all know it.
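Concretely, the two conversions look like this (function names are mine; the payoff amounts are the standard Newcomb numbers), and they argmax to different answers:

```python
# Conversion 1: "the predictor is perfect" -- the box contents are a
# function of the choice x, so they sit on the choice's side of the math.
def payoff_predicted(x):
    opaque = 1_000_000 if x == "one-box" else 0
    return opaque + (1_000 if x == "two-box" else 0)

# Conversion 2: "the boxes are already filled" -- the contents are a
# constant, independent of x.
def payoff_constant(x, opaque):
    return opaque + (1_000 if x == "two-box" else 0)

print(max(["one-box", "two-box"], key=payoff_predicted))        # one-box
print(max(["one-box", "two-box"],
          key=lambda x: payoff_constant(x, opaque=1_000_000)))  # two-box
```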
I finished the post that explains the problem with the decision theory you proposed: calculating payoffs in the most direct way risks spurious counterfactuals. (I hope you don't mind that I called it "naive decision theory", since you yourself said it seemed like the obvious, straightforward thing to do.)
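To gesture at the failure mode (the scenario and numbers below are invented for illustration; this is my toy reading of "spurious counterfactuals", not the post itself):

```python
# Toy model of a spurious counterfactual. The agent deduces statements of
# the form "action == a implies utility == u" as material implications
# about its own already-determined behavior.
ACTUAL = "A"                     # what the agent in fact does
TRUE_PAYOFF = {"A": 5, "B": 10}  # what the world in fact pays

def deducible(action, utility):
    # An implication with a false antecedent is deducible for ANY utility:
    # that's the spurious counterfactual.
    if action != ACTUAL:
        return True
    return utility == TRUE_PAYOFF[action]

# The most pessimistic deducible payoff for each action:
for a in ["A", "B"]:
    print(a, min(u for u in range(11) if deducible(a, u)))
# A 5  -- correct
# B 0  -- spurious: "B" really pays 10, but the vacuous implication lets
#         the agent deduce any payoff for it, including 0.
```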