Please … Newcomb’s is a toy, non-mathematizable problem, not a valid argument for anything at all. There must be a better example, or the entire problem is invalid.
There must be a better example, or the entire problem is invalid.
I’ve long thought that voting in general is largely isomorphic to Newcomb’s. If you cop out and don’t vote, then everyone like you will reason the same way and not vote, and your favored candidates/policies will fail; but if you vote then the reverse might happen; and if you then carry it one more step… If you could just decide to one-box/vote then maybe everyone else like you will.
Sorry, in voting you don’t play the singular boss role that you play in Newcomb’s problem. But it’s amusing how far democracy proponents will go to convince themselves that their vote matters. :-)
I haven’t worked it out rigorously (else you would’ve seen a post on it by now!), but it seems to me that in close elections (Florida 2000, say) that thought process could be valid. Considering how small the margins sometimes are, and how much of the electorate doesn’t vote, it doesn’t strike me as implausible that there are enough people thinking like me to make a difference.
And of course we could just specify as a condition that you and yours are a bloc powerful enough to affect the election. (Maybe you’re numerous, maybe there are only a few electors, whatever.)
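To make the disagreement concrete, here is a toy expected-value model (my own construction; the bloc size, payoff, and cost are made-up numbers). The bloc swings the election only if every member votes, and a single correlation parameter captures the point of contention: how strongly one member’s decision predicts the others’.

```python
# Toy model of voting-as-Newcomb. All numbers are illustrative, not from the thread.

def expected_value(vote, n_others, correlation, p_base, value, cost):
    """My expected payoff if the bloc wins only when every member votes."""
    # Probability that each other bloc member votes, given my choice:
    # at correlation 0 they vote with base rate p_base regardless of me;
    # at correlation 1 they do exactly what I do.
    p_other = p_base + correlation * ((1.0 if vote else 0.0) - p_base)
    p_win = (1.0 if vote else 0.0) * p_other ** n_others
    return p_win * value - (cost if vote else 0.0)

# Newcomb-like reading: my deliberation is perfect evidence about theirs.
print(expected_value(True, 100, 1.0, 0.5, 1_000_000, 10))   # 999990.0 -> vote
print(expected_value(False, 100, 1.0, 0.5, 1_000_000, 10))  # 0.0

# Independent-voters reading: the others do what they do regardless of me.
print(expected_value(True, 100, 0.0, 0.5, 1_000_000, 10))   # ~-10.0 -> abstain
print(expected_value(False, 100, 0.0, 0.5, 1_000_000, 10))  # 0.0
```

Whether voting “wins” here turns entirely on that correlation parameter, which is exactly what the next comments dispute.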
But it’s amusing how far democracy proponents will go to convince themselves that their vote matters.
The problem with irrelevant ad hominems is that they’re very often based on flimsy evidence, and so are often wrong. I didn’t even vote last year, because I figured my vote didn’t matter. I was not surprised.
In Newcomb’s problem you’re the boss: you can, for example, assign yourself a suitable utility function beforehand to keep the million and screw the thousand. Not so in voting: no matter what you think, other people won’t change. Nothing they do is conditioned on the outcome of your thought process, the way the predictor’s move is in Newcomb’s. No, not even if “people thinking like you” form a bloc; you still can’t influence them. It’s a coordination game, not Newcomb’s.
Your reasoning resembles the “twins fallacy” in the Prisoner’s Dilemma: the idea that just by choosing to cooperate you can magically force your identical partner to do the same. Come to think of it, PD sounds like a better model for voting to me.
Update: Eliezer seems to think PD and Newcomb’s are related. Not sure why.
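The “twins” calculation from the previous paragraph is easy to exhibit (standard PD payoffs; the sketch is mine, not the commenter’s):

```python
# Standard one-shot Prisoner's Dilemma, row player's payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def ev(my_move, p_twin_cooperates):
    """My expected payoff given the probability that the twin cooperates."""
    p = p_twin_cooperates
    return p * PAYOFF[(my_move, "C")] + (1 - p) * PAYOFF[(my_move, "D")]

# "Twins" assumption: the twin mirrors whatever I choose.
print(ev("C", 1.0), ev("D", 0.0))  # 3 1 -> cooperating looks better

# Independent twin: defecting is strictly better at every p.
for p in (0.0, 0.5, 1.0):
    print(ev("D", p) - ev("C", p))  # 1.0, 1.5, 2.0 -> always positive
```

The alleged fallacy is sliding from the first computation (where your choice moves p) to a claim about the second (where it doesn’t).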
As far as I can tell, Newcomb’s problem exists only in English, and only because a completely aphysical causality loop is introduced. Every mathematization I’ve ever seen collapses it into either a trivial one-boxing problem or a trivial two-boxing problem.
If anybody wants this problem to be treated seriously, maths first, to show the problem is real! Otherwise we’re really not much better off than if we were discussing quotes from the Bible.
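For what it’s worth, the collapse is easy to exhibit in both directions (a sketch with made-up accuracy and stakes; the two readings are just whether the prediction is conditioned on the choice or fixed independently of it):

```python
M, K = 1_000_000, 1_000  # opaque-box prize and transparent-box prize

def ev_conditioned(action, accuracy=0.99):
    """Reading A: P(opaque box is full) depends on what I choose."""
    if action == "one-box":
        return accuracy * M
    return (1 - accuracy) * M + K  # two-box

def ev_fixed(action, p_full):
    """Reading B: contents already fixed with probability p_full, whatever I do."""
    base = p_full * M
    return base if action == "one-box" else base + K

print(ev_conditioned("one-box"), ev_conditioned("two-box"))  # 990000.0 11000.0
print(ev_fixed("one-box", 0.5), ev_fixed("two-box", 0.5))    # 500000.0 501000.0
```

Reading A trivially one-boxes; reading B trivially two-boxes for every p_full, since you always pick up the extra K. The dispute is over which reading the English statement licenses.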
If you’ve seen formalizations, then it is formalizable. What are the formalizations?
Since I think the answer is obviously one-box, it doesn’t surprise me that there is a formalization in which that answer is obvious. I have never seen a formalization in which the answer is two-box. I have seen the argument that “causal decision theory” (?) chooses to two-box. People jump from that to the conclusion that the answer is two-box, but that is an idiotic conclusion: given the premise, the correct conclusion is that the decision theory is inadequate. Anyhow, I don’t believe the argument. I interpret it simply as the decision theory failing to believe the statement of the problem; there is a disconnect between the words and the formalization of that decision theory.
The issue is not formalizing Newcomb’s problem by itself; the problem is creating a formal decision theory that can understand a class of scenarios that includes Newcomb’s problem. (It should be possible to tweak the usual decision theory to make it capable of believing Newcomb’s problem, but I don’t think that would be adequate for a larger class of problems.)
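The “disconnect” can be written out directly (the standard textbook contrast between causal and evidential expected utility; nothing here is specific to this thread):

```latex
% Causal decision theory: the state s (box contents) is distributed
% independently of the act a, so taking the extra thousand always helps.
U_{\mathrm{CDT}}(a) = \sum_{s} P(s)\, u(a, s)

% The "believe the problem" tweak: condition the state on the act, so that
% one-boxing is evidence that the opaque box is full.
U_{\mathrm{EDT}}(a) = \sum_{s} P(s \mid a)\, u(a, s)
```

On the first formula, two-boxing is forced for any P(s), which is one way to cash out “failing to believe the statement of the problem”: a reliable predictor is precisely the claim that s and a are not independent.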