I’ve had similar concerns but haven’t gotten around to writing anything about it here—mostly because I figured it had already been addressed in hundreds of comments that have been posted on these topics, and I didn’t want to go through them all.
Here are my basic thoughts on the subject. When we are constructing our decision theory, we need to make sure we're setting ourselves up to optimize over the correct domain. If Omega exists but is actually a trickster with a taste for your money, one specifically good at fooling you into believing he's not Omega at all but rather Epsilon, the younger, smaller, and more honest of the two brothers, then designing your decision theory to be susceptible to counterfactual mugging is a bad idea. Though I suppose I'm splitting hairs, because how would we choose between decision theories? A meta-decision theory? I guess that brings us back to the simultaneously insightful and vague "rational agents win!"
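Something like the following back-of-the-envelope comparison is what I have in mind. It's only a sketch: the $100 / $10,000 stakes are the ones from the usual statement of counterfactual mugging, and the credences that the asker is the genuine Omega (rather than a money-grabbing impostor of the Epsilon sort) are made-up numbers for illustration.

```python
ASK = 100        # what the being asks you to hand over on tails
PAYOUT = 10_000  # what a genuine Omega would have given you on heads, had you been a payer

def expected_value(policy_pays: bool, p_genuine: float) -> float:
    """Expected value of a policy, given credence p_genuine that the asker is
    the real Omega (fair coin, honest counterfactual payout) rather than a
    trickster who simply pockets whatever you hand over."""
    if not policy_pays:
        return 0.0  # refusers never gain or lose anything in either case
    genuine = 0.5 * PAYOUT - 0.5 * ASK   # fair coin: heads would have paid out, tails costs you
    trickster = -ASK                     # the impostor always asks and never pays
    return p_genuine * genuine + (1 - p_genuine) * trickster

for p in (0.9, 0.1, 0.01):
    print(f"p(genuine Omega)={p:>5}: pay -> {expected_value(True, p):>8.2f}, "
          f"refuse -> {expected_value(False, p):>8.2f}")
```

With these stakes, the "pay" policy only stops winning once your credence that you're really facing Omega drops below roughly 2%, so the trickster worry bites only when deception is the overwhelmingly likely case. That's one way of making the "optimize over the correct domain" point concrete: the right design depends on which beings you actually expect to meet.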