Depends. Do you generally think that thought experiments involving fictional/nonexistent entities are irrelevant (to what?) and not worth thinking about? Or is there something special about Newcomb’s problem?
If the former, yes, I think you’re missing something. If the latter, then you might not be missing anything.
Thanks for this answer.
I think it’s only Newcomb’s problem in particular. I just can’t imagine how (1) knowing the right answer to this problem, or (2) thinking about it at all, could improve my life or anyone else’s in any way.
Quite recently I read someone (I can’t remember where; LessWrong itself? ETA: yes, here and on So8res’ blog) saying that Newcomb-like problems are the rule in social interactions. Every time you deal with someone who is trying to predict what you will do, and who might be better at it than you, you have a Newcomb-like problem. If you simply make what seems to you the obviously better decision, the other person may have anticipated that and made that choice appear deceptively better for you.
“Hey, check out this great offer I received! Of course, these things are scams, but I just can’t see how this one could be bad!”
“Dude, you’re wondering whether you should do exactly what a con artist has asked you to do?”
Now and then a less technically minded friend will ask my opinion about a dodgy email they received. My answer always begins, “IT’S A SCAM. IT’S ALWAYS A SCAM.”
Newcomb’s Problem reduces this sort of situation to its bare essentials: the con artist is the predictor, and taking the apparently dominant option is two-boxing. A decision theory that two-boxes may not be much use to an AGI, or to a person.
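(To make “bare essentials” concrete: below is a minimal expected-value sketch, assuming the standard payoffs from the literature, $1,000 in the visible box and $1,000,000 in the opaque box iff the predictor foresaw one-boxing. It takes the evidentialist reading, conditioning on the predictor’s accuracy; the two-boxer’s dominance argument rejects exactly that conditioning step, which is what makes the problem contentious.)

```python
# A minimal sketch of the standard Newcomb payoffs, treating the choice as a
# straight expected-value calculation conditioned on the predictor's accuracy.
# The dollar amounts are the usual ones from the literature; the evidentialist
# framing is an assumption, not the only reading of the problem.

def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected dollars, given the predictor is right with probability `accuracy`."""
    if one_box:
        # The opaque box contains $1M iff the predictor foresaw one-boxing.
        return accuracy * 1_000_000
    else:
        # Two-boxers always get the visible $1,000; they also get the $1M
        # only in the (1 - accuracy) case where the predictor guessed wrong.
        return 1_000 + (1 - accuracy) * 1_000_000

for p in (0.50, 0.51, 0.90, 0.99):
    print(f"accuracy={p:.2f}  one-box={expected_payoff(True, p):>12,.0f}"
          f"  two-box={expected_payoff(False, p):>12,.0f}")
```

On this reading, one-boxing pulls ahead in expectation once the predictor is right more than about 50.05% of the time, barely better than chance. That is why an imperfect predictor, like a practiced con artist, is enough to make the problem bite.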
(nods)
And how would you characterize Newcomb’s problem?
For example, I would characterize it as raising questions about how to behave in situations where our own behaviors can reliably (though imperfectly) be predicted by another agent.