That problem is why you have to be careful with the insanity hypothesis: there IS a possibility that you are insane, but it does not rest on the same evidence as the hypothesis that Omega exists. Which is where a lot of people in this thread are coming from: sanity tests, dreaming tests, and so on give you evidence about your sanity without involving Omega at all, so through repeated and varied tests you can push the probability that you are insane below the probability that Omega exists, and only then start taking the Omega possibility seriously.
In other words, apparently experiencing Omega is strong evidence both that you are insane and that Omega exists. Since your prior for insanity is higher than your prior for Omega existing, you should keep testing the insanity hypothesis until you have enough evidence against it that Omega becomes the more likely explanation.
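A rough sketch of that update, with entirely invented priors and likelihood ratios (the thread gives no numbers, so everything below is hypothetical):

```python
# Rough sketch of the "test insanity until Omega wins" update.
# All numbers are invented for illustration; the thread gives no actual values.

def posterior_odds(prior_omega, prior_insane, test_lr_sane, n_tests):
    """Odds of 'Omega is real' vs. 'I am insane' after n independent passed sanity tests.

    test_lr_sane is the likelihood ratio of passing one sanity test,
    P(pass | sane) / P(pass | insane). Each passed test shifts the odds
    toward the Omega explanation by this factor (assuming independence).
    """
    odds = prior_omega / prior_insane          # starting odds, heavily favouring insanity
    return odds * (test_lr_sane ** n_tests)    # Bayes: multiply in each test's likelihood ratio

# Hypothetical priors: Omega-like being 1e-9, "I am having a psychotic break" 1e-4.
# Suppose each sanity test is passed and is 10x likelier to be passed if you are sane.
for n in range(8):
    odds = posterior_odds(1e-9, 1e-4, 10.0, n)
    print(f"after {n} passed tests, odds(Omega : insanity) = {odds:.3g}")

# With these made-up numbers, the odds reach 1 after 5 passed tests
# and favour the Omega explanation from the 6th test on.
```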
As far as the precognition study is concerned, the insanity hypothesis would not even have risen to my attention were it not for this thread. Having considered it for about a millisecond, I dismiss it out of hand. I do not take seriously the idea that I imagined the existence of the study or of this thread discussing it.
As a general rule, I’m not much interested in effects as tiny as the ones it reports, however astounding they would be if real, because the smaller they are, the easier it is for some tiny flaw in the experiment to have produced them by mundane means. The p-values mean nothing more than “something is going on”, and excluding all the mundane explanations will take a lot of work. I have not read the paper to see how much of that work they have already done.
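To illustrate why a tiny effect is cheap to produce by mundane means, here is a toy simulation (all numbers invented, not taken from the paper): a 51% hit rate caused purely by some procedural flaw still yields an impressive-looking p-value once the sample is large.

```python
import random
import math

# Toy illustration (all numbers invented): a "precognition" hit rate of 51%
# produced entirely by some mundane procedural flaw, not by psi.
random.seed(0)
n = 100_000                                   # large sample, as in a big study
hits = sum(random.random() < 0.51 for _ in range(n))

# One-sided z-test against the null hypothesis of a 50% hit rate.
p_hat = hits / n
z = (p_hat - 0.5) / math.sqrt(0.5 * 0.5 / n)
p_value = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z >= z) for a standard normal

print(f"hit rate = {p_hat:.4f}, z = {z:.2f}, one-sided p = {p_value:.2e}")
# The p-value comes out astronomically small, yet all it tells you is
# "something is going on" -- it cannot distinguish psi from the flaw.
```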
What it would take to believe in Omega is irrelevant to Omega problems. The point of such a problem is to discuss it conditional on the assumptions that Omega exists as described and that the people in the story know this. Within the fiction, certain things are true and are believed to be true by everyone involved, and the question is not “how come?” but “what next?”
But it is relevant to the argument that such hypotheticals would only ever occur to you when you’re insane or dreaming, in which case sanely working your way through the problem is of no use to you, because you’ll never get to apply any of this reasoning in a sane situation.
The main post made the point that
it seems that a decision theory that adjusts itself to give the right answer in insane situations is not going down the right track.
and this is the counterpoint: there are ways of checking whether the situation is insane, and ways of assuring yourself that it is almost certainly sane. Saying that the point of the Omega problems is to discuss them conditional on the setup being truthful doesn’t help when the author is saying that the “whole point of the Omega problems” will only ever be of use to you when you’re insane.
Being completely simulated by an external party is an unrealistic scenario for a human, but a very realistic one for an artificial intelligence. I always assumed that was one of the primary reasons for LW’s fascination with Omega problems.
Also, not all Omega problems are equal. As has been pointed out a bazillion times, Newcomb’s Paradox works just as well if you only assume a good guesser with a consistently better-than-even track record (and indeed, IMO, it should be phrased like that from the start, sacrificing simplicity for the sake of conceptual hygiene), so insanity considerations are irrelevant there. On the flip side, Counterfactual Mugging is “solved” by taking the best potential benefit right now as you answer the question, so the likelihood of Omega existing does have a certain role to play.
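As a quick check of that “better-than-even guesser” claim, here is a back-of-the-envelope expected-value comparison, assuming the usual $1,000 / $1,000,000 payoffs (the exact threshold depends on those amounts):

```python
# Expected payoffs in Newcomb's problem with a predictor of accuracy p,
# using the usual $1,000 (visible) and $1,000,000 (predicted) amounts.

SMALL, BIG = 1_000, 1_000_000

def ev_one_box(p):
    # The predictor is right with probability p, so the big box is full that often.
    return p * BIG

def ev_two_box(p):
    # You always get the small box; the big box is full only when the predictor erred.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.5005, 0.51, 0.6, 0.9, 1.0):
    better = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p = {p:<6} one-box EV = {ev_one_box(p):>9,.0f}  "
          f"two-box EV = {ev_two_box(p):>9,.0f}  -> {better}")

# One-boxing has the higher expectation as soon as p > 0.5005, i.e. a predictor
# only slightly better than chance already suffices with these payoffs.
```

Of course this only says which choice maximizes expected money under these assumptions; it doesn’t settle the causal-versus-evidential dispute that makes the problem interesting.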
Being completely simulated by an external party is a realistic scenario for a human, given that an artificial intelligence exists. This might also be part of the fascination.
And also conditional on the frequent LW belief that AIs become gods, but yeah.
I read Omega as being like the ideal frictionless pucks, point masses, and infinitely flexible and inextensible cords of textbook physics problems. To object that such materials do not exist, or to speculate about the advances in technology that must have happened to produce them is to miss the point. These thought experiments are works of fiction and need not have a past, except where it is relevant within the terms of the experiment.
Yes. But there are people on LW seriously claiming to assign a nonzero (or more-than-epsilon) probability of Omega (a philosophical abstraction built for thought experiments about edge cases) manifesting in the real world. At that point, they’ve arguably thought themselves into being less instrumentally functional.
Being charitable, they assign a non-zero probability to the possibility of something coming into existence that is powerful enough to have the kinds of powers that our fictional Omega has. This is subtly different from believing the abstraction itself will come to life.
The problem is that Omega is by definition a hypothetical construct for purposes of exploring philosophical edge cases. We’re not talking about any reasonable real-world phenomenon.
Let’s taboo Omega. What are you actually describing happening here? What is the entity that can do these things, in your experience? I don’t believe your first thought would be “gosh, it turns out the philosophical construct Omega is real.” You’d think of the entity as a human. What characteristics would you ascribe to this person?
e.g. A rich and powerful man called O’Mega (of, say, Buffett or Soros levels of wealth and fame: you know this guy is real, very smart, and ridiculously more successful than you) shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. O’Mega shows you that he has put $1,000 in box B. O’Mega says that if he predicted you would take box A only, he has put $1,000,000 in box A (he does not show you); otherwise he has left it empty. O’Mega says he has played this game many times and has never been wrong in his predictions about whether someone will take both boxes or not.
Would the most useful response in this real-world situation be:
1. Take both boxes.
2. Take box A.
3. Walk away, not taking any money, because you don’t understand his wacky game and want no part of it, before going home to hit the Internet and tell Reddit and LessWrong about it.
4. Invent timeless decision theory?
I think the first thing I would do is ask what he’ll give me if he is wrong. ;-)
(Rationale: my first expectation is that Mr. O’Mega likes to mess with people’s minds, and sees it as worth risking $1000 just to watch people take an empty box and get pissed at him, or to crow in delight when you take both boxes and feel like you lost a million that you never would’ve had in the first place. So, absent any independent way to determine the odds that he’s playing the game sincerely, I at least want a consolation prize for the “he’s just messing with me” case.)
Well, the situation that the post was discussing is not Newcomb’s problem, it’s counterfactual mugging, so how about:
The CEO of your employer Omegacorp schedules a meeting with you. When you enter, there is a coin showing tails in front of him. He tells you that while you were outside his office, he flipped this coin. If it had come up heads, he would have tripled your salary, but since it came up tails he is asking you to take a 10% cut in your pay. Now, he says, arbitrary changes in your pay, even changes for the better, can’t happen without your approval. Given that you desire money, you clearly want to refuse the pay cut. He notes that you would have accepted either outcome if you’d been given the choice to take the bet before the coin was flipped, and that the only reason he didn’t let you into the office to watch the flip (or pre-commit to agreeing) was that company regulations prohibit gambling with employees’ salaries.
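For what it’s worth, a back-of-the-envelope expected-value calculation for this salary version, assuming a fair coin and a CEO who is playing the game straight:

```python
# Back-of-the-envelope numbers for the Omegacorp salary version,
# assuming a fair coin and a CEO who is playing straight.

salary = 1.0  # current salary, in arbitrary units

# Ex ante (before the flip): an employee who would accept either outcome
# faces a 50/50 mix of triple salary and a 10% cut.
ev_accepting_policy = 0.5 * 3.0 * salary + 0.5 * 0.9 * salary   # = 1.95
ev_refusing_policy  = 1.0 * salary                              # = 1.00

# Ex post (the coin already shows tails): agreeing now just means a 10% cut.
ev_agree_now  = 0.9 * salary
ev_refuse_now = 1.0 * salary

print(f"ex ante: accept policy = {ev_accepting_policy}, refuse = {ev_refusing_policy}")
print(f"ex post: agree to cut = {ev_agree_now}, refuse cut = {ev_refuse_now}")
# The tension in the problem is exactly this gap: the policy you would have
# wanted beforehand tells you to take the cut now, while the ex-post numbers don't.
```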
In this case you ought to consider the possibility you are dreaming or insane. Response 3 makes a lot of sense here, even down to the Reddit and LessWrong parts.
Before that I’d consider the possibility that the CEO is not entirely sound.
Has anyone actually attempted a counterfactual mugging at a LessWrong meetup?