Adding to your story, it’s not just Eliezer Yudkowsky’s introduction to Newcomb’s problem. It’s the entire Bayesian / Less Wrong mindset. Here, Eliezer wrote:
That was when I discovered that I was of the type called ‘Bayesian’. As far as I can tell, I was born that way.
I felt something similar when I was reading through the sequences. Everything “clicked” for me—it just made sense. I couldn’t imagine thinking another way.
Same with Newcomb’s problem. I wasn’t introduced to it by Eliezer, but I still thought one-boxing was obvious; it works.
Many LessWrongers who have stuck around have probably had a similar experience: the Bayesian standpoint seems intuitive. Eliezer’s endorsement certainly helps propagate one-boxing, but LessWrongers seem to be a self-selecting group.
It also helps that most Bayesian decision algorithms actually take on the argmax_a Σ_o U(o)·P(o|a) reasoning of Evidential Decision Theory. That means that whenever you invoke your self-image as a capital-B Bayesian, you are semi-consciously invoking Evidential Decision Theory, which does get the right answer on Newcomb’s problem, even if it messes up on other problems (such as the smoking lesion).
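To make that concrete, here is a minimal sketch of that rule applied to Newcomb’s problem. The 0.99 predictor accuracy and the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) are illustrative assumptions, not anything taken from the comment above:

```python
# A minimal sketch of the EDT rule argmax_a sum_o U(o) * P(o | a) on
# Newcomb's problem. The 0.99 predictor accuracy and the standard payoffs
# are illustrative assumptions.

# P(opaque box is full | action). Conditioning on your own action is what
# makes this Evidential rather than Causal Decision Theory.
p_full_given = {"one-box": 0.99, "two-box": 0.01}

# U(outcome) in dollars, indexed by (action, opaque box is full).
payoff = {
    ("one-box", True): 1_000_000,
    ("one-box", False): 0,
    ("two-box", True): 1_001_000,
    ("two-box", False): 1_000,
}

def evidential_expected_utility(action: str) -> float:
    """Sum over outcomes of U(outcome) * P(outcome | action)."""
    p = p_full_given[action]
    return p * payoff[(action, True)] + (1 - p) * payoff[(action, False)]

for action in p_full_given:
    print(f"{action}: EU = {evidential_expected_utility(action):,.0f}")

best = max(p_full_given, key=evidential_expected_utility)
print("EDT chooses:", best)  # one-box: 990,000 vs 11,000
```

Conditioning on the action is doing all the work here: if you swap P(full | a) for a single unconditional P(full), two-boxing dominates by $1,000 and the same calculation recommends it, which is the CDT-style answer.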
(Commenting because I got here while looking for citations for my WIP post about another way to handle Newcomb-like problems.)