As far as the precognition study is concerned, the insanity hypothesis would not even rise to my attention were it not for reading this thread. Having considered the hypothesis for about a millisecond, I dismiss it out of hand. I do not take seriously the idea that I imagined the existence of the study or this thread discussing it.
As a general rule, I’m not much interested in such tiny effects as it reports, however astounding it would be if they were real, because the smaller they are, the easier it is for some tiny flaw in the experiment to have produced them by mundane means. The p values mean nothing more than “something is going on”, and excluding all mundane explanations will take a lot of work. I have not read the paper to see how much of this work they have done already.
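A toy simulation makes the point concrete (this is a sketch, not the study’s actual analysis; the 1% bias and the sample size are made-up numbers). Any mundane flaw that nudges a 50% hit rate even slightly will, at a large enough sample size, produce an arbitrarily small p-value — which is why a small p only tells you “something is going on”, not what:

```python
import math
import random

def one_sided_p(hits, n, p0=0.5):
    """One-sided p-value for a binomial test, via the normal approximation."""
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    z = (hits - mean) / sd
    # Survival function of the standard normal.
    return 0.5 * math.erfc(z / math.sqrt(2))

random.seed(0)
n = 100_000
# A hypothetical mundane flaw that nudges the true hit rate from 50% to 51%.
biased_rate = 0.51
hits = sum(random.random() < biased_rate for _ in range(n))

print(one_sided_p(hits, n))  # tiny p-value, produced by a 1% mundane bias
```

The p-value here is astronomically small, yet nothing paranormal happened: the “effect” is a 1% procedural bias amplified by sample size.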
What it would take to believe in Omega is irrelevant to Omega problems. The point of such a problem is to discuss it conditional on the assumptions that Omega exists as described and that the people in the story know this. Within the fiction, certain things are true and are believed to be true by everyone involved, and the question is not “how come?” but “what next?”
What it would take to believe in Omega is irrelevant to Omega problems.
But it is relevant to the argument that such hypotheticals only occur when you’re insane or dreaming, and that sanely working your way through such a problem is therefore useless to you, because you’ll never apply any of this reasoning in a sane situation.
The main post made the point that
it seems that a decision theory that adjusts itself to give the right answer in insane situations is not going down the right track.
and this is the counterpoint: that there are ways of checking whether the situation’s insane, and ways of assuring yourself that the situation is almost certainly sane. Saying that the point of the Omega problems is to discuss them conditional on Omega being truthful doesn’t help when the author is saying that the “whole point of the Omega problems” is only going to be of use to you when you’re insane.
Being completely simulated by an external party is an unrealistic scenario for a human, but a very realistic one for an artificial intelligence. I always assumed that was one of the primary reasons for LW’s fascination with Omega problems.
Also, not all Omega problems are equal. As has been pointed out a bazillion times, Newcomb’s Paradox works just as well if you only assume a good guesser with a consistently better-than-even track record (and indeed, IMO, it should be phrased like that from the start, sacrificing simplicity for the sake of conceptual hygiene), so insanity considerations are irrelevant. On the flip side, Counterfactual Mugging is “solved” by taking the best immediate payoff as you answer the question, so the likelihood of Omega does have a certain role to play.
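The “better-than-even” claim can be made precise with a quick expected-value check (the $1,000,000 and $1,000 payoffs are the standard Newcomb numbers, assumed here; this ignores risk aversion and any correlation subtleties). With a predictor of accuracy p, one-boxing pays p·$1M and two-boxing pays (1−p)·$1M + $1,000, so one-boxing wins for any p above roughly 50.05% — no omniscience required:

```python
def one_box_ev(p, big=1_000_000, small=1_000):
    """Expected value of one-boxing: you get the big box iff predicted correctly."""
    return p * big

def two_box_ev(p, big=1_000_000, small=1_000):
    """Expected value of two-boxing: the big box is full iff the predictor erred."""
    return (1 - p) * big + small

# Break-even accuracy: p*big = (1-p)*big + small  =>  p = (big + small) / (2*big)
break_even = (1_000_000 + 1_000) / (2 * 1_000_000)
print(break_even)                            # 0.5005
print(one_box_ev(0.6) > two_box_ev(0.6))     # True: a merely decent guesser suffices
```

So even a predictor who is right 60% of the time already makes one-boxing the better bet by a wide margin, which is why the paradox survives replacing Omega with a fallible guesser.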
Being completely simulated by an external party is an unrealistic scenario for a human, but a very realistic one for an artificial intelligence
Being completely simulated by an external party is a realistic scenario for a human, given that an artificial intelligence exists. This might also be part of the fascination.
Saying that the point of the Omega problems is to discuss them conditional on Omega being truthful doesn’t help when the author is saying that the “whole point of the Omega problems” is only going to be of use to you when you’re insane.
I read Omega as being like the ideal frictionless pucks, point masses, and infinitely flexible and inextensible cords of textbook physics problems. To object that such materials do not exist, or to speculate about the advances in technology that must have happened to produce them is to miss the point. These thought experiments are works of fiction and need not have a past, except where it is relevant within the terms of the experiment.
Yes. But there are people on LW seriously claiming to assign a nonzero (or non-zero-plus-epsilon) probability of Omega—a philosophical abstraction for the purpose of thought experiments on edge cases—manifesting in the real world. At that point, they’ve arguably thought themselves less instrumentally functional.
there are people on LW seriously claiming to assign a nonzero probability of Omega—a philosophical abstraction for the purpose of thought experiments on edge cases—manifesting in the real world.
Being charitable, they assign a non-zero probability to the possibility of something coming into existence that is powerful enough to have the kinds of powers that our fictional Omega has. This is subtly different from believing the abstraction itself will come to life.
Being completely simulated by an external party is a realistic scenario for a human, given that an artificial intelligence exists.

And also conditional on the frequent LW belief that AIs become gods, but yeah.