But unless you get into self-referential moral problems, no. I can't think of one off the top of my head, but I suspect you could find them among decisions that affect your decision algorithm, or where your decision-making algorithm affects the possible outcomes. Probably like Newcomb's problem, only twistier.
That one thing a couple years ago qualifies.
(Warning: this may be basilisk territory.)