I always enjoy convoluted Omega situations, but I don’t understand how these theoretical entities get to the point where their priors are as stated (and especially the meta-priors about how they should frame the decision problem).
Before the start of the game, Omega has some prior distribution over the Agent’s beliefs and update mechanisms. And the Agent has some distribution of beliefs about Omega’s predictive power over situations where the Agent “feels like” it has a choice. What experiences cause Omega to update sufficiently to even offer the problem (ok, this is easy: quantum brain scan or other Star Trek technobabble)? But what lets the Agent update to believing that their qualia of free will are such an illusion in this case? And how do they then NOT meta-update to understand the belief-action-payout matrix well enough to take the most profitable action?
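To make “the most profitable action” concrete, here is a minimal expected-value sketch. It assumes the standard illustrative Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, a fixed $1,000 in the transparent box) and treats the Agent’s credence p in Omega’s predictive accuracy as a free parameter; both figures are assumptions for illustration, not anything stated above.

# Expected payout as a function of the Agent's credence p that Omega predicts correctly,
# under the standard illustrative payoffs ($1,000,000 opaque box, $1,000 transparent box).
def expected_value(p, one_box):
    if one_box:
        # The opaque box is full iff Omega (correctly) predicted one-boxing.
        return p * 1_000_000
    else:
        # Two-boxing always collects $1,000; the opaque box is full only if
        # Omega (incorrectly) predicted one-boxing.
        return 1_000 + (1 - p) * 1_000_000

# Locate roughly where the most profitable action flips.
for p in (0.4, 0.5, 0.5005, 0.6, 0.99):
    ev_one, ev_two = expected_value(p, True), expected_value(p, False)
    print(f"p={p}: one-box EV={ev_one:,.0f}, two-box EV={ev_two:,.0f}")

With these payoffs the crossover sits at p ≈ 0.5005, so the Agent’s belief about Omega’s accuracy is doing essentially all of the decision-theoretic work.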
I guess I’ve discussed my perspective on the issue of unrealistic hypotheticals here, although you’ve already commented on that post. Beyond that, Scott Alexander’s The Least Convenient Possible World is a great post, but I suspect you’ve seen it too.
One additional thing I can add is that this seems related to Decoupling vs. Contextualising norms.
BTW, I created a wiki page for hypotheticals. I’ve summarised some arguments on why we should pay attention to unrealistic hypotheticals, but it’d be useful to have some opposing arguments listed there as well.
Useful pointers. I do remember those conversations, of course, and I think the objections (and valid uses) remain—one can learn from unlikely or impossible hypotheticals, but it takes extra steps to specify why some parts of it would be applicable to real situations. I also remember the decoupling vs contextualizing discussion, and hadn’t connected it to this topic—I’m going to have to think more before I really understand whether Newcomb-like problems have clear enough paths to applicability that they can be decoupled by default or whether there’s a default context I can just apply to make sense of them.