My book discusses a similar scenario: the dual-simulation version of Newcomb’s Problem (section 6.3), in the case where the large box is empty (no $1M) and (I argue) it’s still rational to forfeit the $1K. Nesov’s version nicely streamlines the scenario.
Just to elaborate a bit, Nesov’s scenario and mine share the following features:
In both cases, we argue that an agent should forfeit a smaller sum for the sake of a larger reward that would have been obtained (counterfactually contingent on that forfeiture) if a random event had turned out differently than it in fact did (and than the agent knows it did).
We both argue for using the original coin-flip probability distribution (i.e., not updating, if I’ve understood that idea correctly) for purposes of this decision, and indeed in general, even in mundane scenarios (a worked expected-value sketch follows this list).
We both note that the forfeiture decision is easier to justify if the coin-toss was quantum under MWI, because then the original probability distribution corresponds to a real physical distribution of amplitude in configuration-space.
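To make the not-updating argument concrete, here is a minimal sketch of the expected-value calculation, assuming the standard payoffs in Nesov’s scenario (a fair coin, a $1M reward granted on heads iff the agent would forfeit $1K on tails); the payoff numbers and function names are mine, for illustration only:

```python
# Expected value of the forfeiture policy in Counterfactual Mugging,
# evaluated over the ORIGINAL coin-flip distribution (i.e., not updating
# on the observed outcome). Payoffs assume the standard version of the
# scenario: fair coin, $1M reward on heads iff the agent would forfeit
# $1K on tails.

P_HEADS = 0.5          # the original, non-updated probability of heads
REWARD = 1_000_000     # paid in the heads branch if the agent would forfeit
FORFEIT = 1_000        # paid by the agent in the tails branch

def expected_value(forfeits: bool) -> float:
    """EV of the policy, computed as if before learning the coin's outcome."""
    heads_payoff = REWARD if forfeits else 0
    tails_payoff = -FORFEIT if forfeits else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_value(forfeits=True))   # 499500.0
print(expected_value(forfeits=False))  # 0.0
```

Forfeiting wins by a wide margin under the original distribution, even though the agent who actually faces the choice already knows the coin came up tails; that, in a nutshell, is the point both scenarios press.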
Nesov’s scenario improves on mine in several ways. He eliminates some unnecessary complications (he uses one simulation instead of two, and just tells the agent what the coin-toss was, whereas my scenario requires the agent to deduce that). So he makes the point more clearly, succinctly, and dramatically. Even more importantly, his analysis (along with that of Yudkowsky, Dai, and others here) is more formal than my ad hoc argument (if you’ve looked at Good and Real, you can tell that formalism is not my forte :)).
I too have been striving for a more formal foundation, but it’s been elusive. So I’m quite pleased and encouraged to find a community here that’s making good progress focusing on a similar set of problems from a compatible vantage point.
And I think I speak for everyone when I say we’re glad you’ve started posting here! Your book was suggested as required rationalist reading. It certainly opened my eyes, and I was planning to write a review and summary so people could more quickly understand its insights.
(And not to be a suck-up, but I was actually at a group meeting the other day where the ice-breaker question was, “If you could spend a day with any living person, who would it be?” I said Gary Drescher. Sadly, no one had heard the name.)
I won’t be able to contribute much to these discussions for a while, unfortunately. I don’t have a firm enough grasp of Pearlean causality and need to read up more on that and on Newcomb-like problems (I’m halfway through your book’s treatment of the latter).
I think you’d find me anticlimactic. :) But I do appreciate the kind words.