Eliezer, I don’t read the main thrust of your post as being about Newcomb’s problem per se.
Having distinguished between ‘rationality as means’ to whatever end you choose and ‘rationality as a way of discriminating between ends’, can we agree that the whole specks/torture debate was something of a red herring? A red herring because it was a discussion that used rationality to discriminate between ends without first defining one’s meta-objectives, or, if one’s meta-objectives involved hedonism, without establishing the rules for performing math over subjective experiences.
To illustrate the distinction using your other example, I could state that I prefer to save 400 lives with certainty simply because the purple fairy in my closet tells me to (my arbitrary preferred objective), and that would be perfectly legitimate. It would only be incoherent if I also declared it to be a strategy that would maximise the number of lives saved if a majority of people adopted it in similar circumstances (a different arbitrary preferred objective).
I could in fact have as my preferred meta-objective for the universe that all the squilth in flobjuckstooge be globberised, and that too would be perfectly legitimate. An FAI (or a BFG, for that matter (Roald Dahl, not Tom Hall)) could scan me, work towards creating the universe in which my proposition is meaningful, and make sure it happens. But if someone else’s preferred meta-objective for the universe is ensuring that the princess on page 3 gets a fairy cake, how is the FAI to prioritise?