I may be a bit in over my head here, but I also don’t see a strong distinction between saying “Assume on Twin Earth that water is XYZ” and saying “Omega creates a world where...” Isn’t the point of a thought experiment to run with the hypothetical and trace out its implications? Yes, care must be taken not to over-infer from the result to a real system that may not match it, but how is this news? I seem to recall some folks (m’self included) finding that squicky with regard to “Torture vs Dust Specks”: if you stop resisting the question and just do the math, the answer is obvious enough, but that doesn’t imply one believes the scenario maps to a realizable condition.
I may just be confused here, but superficially it looks like a failure to apply the principle of “stop resisting the hypothetical” evenly.
I do worry that thought experiments involving Omega can lead decision theory research down wrong paths (for example, by giving people misleading intuitions), and I try to make sure the ones I create or pay attention to are not just arbitrary thought experiments but illuminate some aspect of real-world decision-making problems that we (or an AI) might face. Unfortunately, this relies largely on intuition, and it’s hard to say exactly what is a valid or useful thought experiment and what isn’t, except maybe in retrospect.