I fail to see why the Coin Flip Creation problems are at all interesting.
It is trivial to rig suboptimal outcomes for every submission in favor of any target ‘optimal’ agent if the game can arbitrarily modify the submitted agent.
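For concreteness, here is a minimal sketch of that rigging move (all names are hypothetical, mine rather than the original post’s): a game that is allowed to replace the submitted agent can anoint any agent as ‘optimal’ simply by sabotaging every other submission.

```python
# Toy illustration (hypothetical names throughout): a game that may
# modify the submitted agent can make any designated agent 'optimal'
# by rewriting everyone else into a losing stub.

def target_agent(observation):
    """The agent the game designer has decided will come out 'optimal'."""
    return "one-box"

def rigged_game(submitted_agent):
    """A game that is allowed to modify the submitted agent.

    The designated target runs unmodified; any other submission is
    'modified' into a stub that forfeits the payoff.
    """
    if submitted_agent is not target_agent:
        submitted_agent = lambda observation: "take-nothing"
    action = submitted_agent("two boxes on a table")
    return 1_000_000 if action == "one-box" else 0

print(rigged_game(target_agent))           # 1000000
print(rigged_game(lambda obs: "one-box"))  # 0, despite an identical policy
```

The second submission follows exactly the same policy as the target yet scores zero, because the game discriminates on the agent’s identity rather than its behavior; no decision theory the submitted agent implements can make a difference.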
(Also, Coin Flip Creation Version 2, like vanilla Newcomb’s problem, requires either that a) the agent is sub-Turing (not capable of general computation), in which case there is no paradox, or that b) Omega has a halting oracle or is otherwise super-Turing, which would violate the Church-Turing thesis, in which case all bets are off.)
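To spell out why prediction is the sticking point, here is a toy diagonalization (hypothetical code, mine, not from the post): an agent capable of general computation can run Omega’s predictor on itself and act contrarily, so a perfect Omega cannot itself be an ordinary program.

```python
# Toy diagonalization (hypothetical code): a Turing-complete agent can
# run Omega's predictor on itself and do the opposite, so no ordinary
# program can predict every agent perfectly.

def omega_predict(agent):
    """Stand-in for Omega: simulate the agent against a dummy predictor
    that always guesses "one-box". Any computable rule would do here."""
    return agent(lambda a: "one-box")

def contrarian(predictor):
    """An agent that asks Omega's own predictor about itself, then does
    the opposite of whatever was predicted."""
    prediction = predictor(contrarian)
    return "two-box" if prediction == "one-box" else "one-box"

predicted = omega_predict(contrarian)  # "two-box"
actual = contrarian(omega_predict)     # "one-box": the prediction fails
print(predicted, actual)
```

The sketch uses one particular computable predictor, but the same contrarian construction defeats any computable choice; escaping it requires either restricting the agent (option a) or granting Omega the power to safely simulate arbitrary programs, i.e. a halting oracle (option b).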
Well, the post did get agreement in the comment section, and made a quite clever-sounding (but wrong) argument about how agents are deterministic in general, etc., and it seemed important to point out the difference between CFC and Newcomb’s problem.
Perhaps I should rephrase:
Why do others find Coin Flip Creation problems at all interesting? Is it a) that they have thought of said arguments and dismissed them (in which case, why? What am I missing?), b) that they haven’t thought of said arguments (in which case, why not? They seemed immediately apparent to me. Am I that much of an outlier?), or c) something else (if so, what)?
Ah, I get you now. I don’t know, of course; a and b could both be in the mix. I have had a similar feeling about an earlier piece on decision theory, which to me seemed (and still seems) so clearly wrong, and which got quite a few upvotes. This isn’t meant to be too negative about that piece; it just seems people have very different intuitions about decision theory, even after having thought (and read) about it quite a bit.