I just re-read it more carefully, and I don’t see where it says that I can assume that Omega is telling the truth...
...but even if it did, my questions still stand, starting with: how do I know that Omega is telling the truth? I cannot at present conceive* of any circumstances under which I would believe someone making the claims that Omega makes.
As I understand it, the point of the exercise is to show how our intuitive moral judgment leads us into inconsistencies or contradictions when dealing with complex mathematical situations (which is certainly true) -- so my point about context being important is still relevant. Give me sufficient moral context, and I’ll give you a moral determination that is consistent—but without that context, intuition is essentially dividing by zero to fill in the gaps.
*Without using my imagination to fill in some very large blanks, anyway, which means I could end up with a substantially different scenario from the one intended.
It’s a convention about Omega that Omega’s reliability is altogether beyond reproach. This is, of course, completely implausible, but it serves as a useful device to make sure that the only issues at hand are the offers Omega makes, not whether they can be expected to pan out.
Okay… this does render moot any conclusions one might draw from this exercise about the fallibility of human moral intuition.
Or was that not the point?
If the question is supposed to be considered in pure mathematical terms, then I don’t understand why I should care one way or the other; it’s like asking me if I like the number 3 better than the number 7.
The point is that Omega’s statements (about Omega itself, about the universe, etc.) are all to be taken at face value as premises in the thought experiments that feature Omega. From these premises, you attempt to derive conclusions. Entertaining variations on the thought experiment where any of the premises are in doubt is cheating (unless you can prove that they contradict one another, thereby invalidating the entire experiment).
Omega is a tool to find your true rejection, if you in fact reject something.
So what I’m supposed to do is make whatever assumptions are necessary to render the question free of any side effects, and then consider it on those terms...
So, let me take a stab at answering the question, given my revised understanding.
“If you pay me just one penny, I’ll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years.” …followed by further shaving-off of survival odds in exchange for life-extension by truly Vast orders of magnitude.
First off, I can’t bring myself to care about the difference; both are incomprehensibly long amounts of time.
Also, my natural tendency is to avoid “deal sweeteners”, presumably because in the real world they would be the “switch” part of the “bait-and-switch”. But Omega is 100% trustworthy, so I don’t need to worry about that—which means I need to specifically override my natural “decision hysteresis” and consider this as an initial choice to be made.
Is it cheating to let the “real world” intrude in the form of the following thought?
If, by the time 10^^3 years (that is, 10^(10^10) years, the shorter of the two lifespans above) have elapsed, I or my civilization has not developed some more controllable means of might-as-well-be-immortality, then I’m probably not going to care too much how long I live past the end of my civilization, much less the end of the universe.
...or am I simply supposed to think of “years of life” as a commodity, like money? (The ensuing monetary analogies would seem to imply this...) Too much of anything, though—money or time—becomes meaningless when multiplied further:
Time: Do I assume my friends get to come with me, and that together we will find some way to survive the inevitable maximization of entropy?
Money: After I’ve bought the earth, and the rights to the rest of the solar system and any other planets we’re able to find with the infinite improbability drive developed by the laboratories I paid for, what do we do with the other $0.99999 x 10^^whatever? (And how do I spend the first part of that money without causing a global economic crisis that will make this one look like a slow day at the taco stand? Oh, wait, though, I’m probably supposed to assume I earned it legitimately by contributing that much value to the global economy… how??? Mind boggles, scenario fails.)
In other words… Omega can have the penny, because it’s totally not about the penny, but I don’t see any point in starting down the road of shaving off probability-points in exchange for orders of magnitude, no matter how large.
In fact, I’d be more inclined to go the other way, if that were an option—reducing the likelihood of death in exchange for a shorter life. (I’m not quite clear on whether this could be reverse-extrapolated from the examples given.) I suspect a thousand years would be enough; give me that, and I can get the rest for myself. (Or am I supposed to assume that I will never be able to extend my life beyond the years Omega gives me? If so, we’re getting way too mystical and into premises that seem like they would force me to revise my understanding of reality in some significant way.)
So I guess my primary answer to Eliezer’s question is that I don’t even start down the garden path because I’m more inclined to walk the other way.
Am I still missing anything?
Please stop allowing your practical considerations to get in the way of the pure, beautiful counterfactual!
Seriously though: either you allow yourself to suspend practicalities and consider pure decision theory, or you don’t. This is a pure maths problem; you can’t equate it to ‘John has 4 apples.’ John has 3^^^3 apples here, causing your mind to break. Forget the apples and years; consider utility!
As I said somewhere earlier (points vaguely upward), my impression was that this was not actually intended as a pure mathematical problem but rather as an example of how our innate decision-making abilities (morality? intuition?) don’t do well with big numbers.
If this is not the case, then why phrase the question as a word problem with a moral decision to be made? Why not simply ask it in pure mathematical terms?
This was my initial reaction as well: ask if I can go the other way until we’re at, say, 1000 years. But if you truly take the problem at face value (we’re negotiating with Omega, and the whole point of Omega is that he neatly lops off alternatives for the purposes of the thought experiment) and are negotiating for your total lifespan, plus or minus nothing, then yes, I think you’d be forced to come up with a rule.
I think my “true rejection”, then, if I’m understanding the term correctly, is the idea that we live in a universe where such absolute certainties could exist—or at least where for-all-practical-purposes certainties can exist without any further context.
This problem seems to have an obvious “shut up and multiply” answer (take the deal), but our normal intuitions scream out against it (a rough sketch of that multiplication follows below). We can easily imagine some negligible chance of living through the next hour, but we just can’t imagine trusting some dude enough to take that chance, or (properly) imagine a period longer than some vast epoch of time.
Since our inability to properly grok these elements of the problem is the fulcrum on which our difficulty balances, it seems more reasonable than usual to question Omega and her claims.
(This problem seems as easy to me as specks vs torture: in both cases you need to shut up and multiply, and in both cases you need to quiet your screaming intuitions—they were trained against different patterns.)
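For what it’s worth, here is a rough sketch of that “shut up and multiply” step, using the numbers from the deal quoted earlier in the thread. The framing and code are mine, purely as an illustration; the lifespans themselves are far too large to hold in a float, so everything is kept in orders of magnitude (log10):

```python
import math

# A toy illustration (my own, not from the original post) of the "shut up and
# multiply" arithmetic for the deal quoted earlier, done in log10 space because
# 10^(10^10) and 10^(10^(10^10)) years overflow any float.

p_before, p_after = 0.80, 0.7999992   # survival probabilities before/after the sweetener

# Cost of accepting the sweetener, in orders of magnitude of expected lifespan:
probability_penalty = math.log10(p_before) - math.log10(p_after)

# Gain: log10 of the lifespan jumps from 10^10 to 10^(10^10), i.e. the lifespan
# grows by (10^(10^10) - 10^10) orders of magnitude; far too big to print as a number.
lifespan_gain = "10^(10^10) - 10^10"

print(f"penalty: roughly {probability_penalty:.1e} orders of magnitude")  # ~4.3e-07
print(f"gain:    {lifespan_gain} orders of magnitude")
```

However you slice it, each sweetener costs a microscopic sliver of probability against an astronomical gain in naively multiplied-out expected years; whether “expected years” is the right thing to maximize is, of course, exactly what is in dispute here.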
I think this is one of the biggest problems with these examples. It is theoretically impossible (assuming your current life history has finite Kolmogorov complexity) to amass enough evidence to trust someone completely.
To me this seems like a fundamental (and mathematically quantifiable!) problem with these hypothetical situations: if a rational agent (one that uses Occam’s razor to model reality) encounters a really complicated god-like being that does all kinds of impossible-looking things, then the agent would sooner conclude that his brain is not working properly (or maybe that he is a Boltzmann brain), which would still be a simpler explanation than assuming the reality of Omega.
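Here is a toy, back-of-the-envelope rendering of that argument in log-odds form. Every number below is an assumption invented for illustration, and the step “N bits of experience can buy at most about N bits of log-odds, because a hallucination hypothesis can always simulate the same experiences” is the hand-wavy part:

```python
# Toy sketch (purely illustrative; all quantities are made-up assumptions) of the
# complexity argument above. Under an Occam-style prior, a hypothesis whose
# shortest description is K bits starts with prior weight on the order of 2^-K,
# and a finite life history worth N bits of evidence can shift the log2-odds
# between two hypotheses by at most roughly N bits.

K_omega = 10**6       # assumed description length of "Omega is real, omnipotent, and truthful"
K_glitch = 10**3      # assumed description length of "my brain is malfunctioning / Boltzmann brain"
N_experience = 10**5  # assumed bits of evidence one lifetime of observation could supply

prior_log_odds = K_glitch - K_omega                  # log2 odds of Omega vs. glitch before evidence
posterior_log_odds = prior_log_odds + N_experience   # best case after all the evidence

print(posterior_log_odds)  # -899000: the glitch hypothesis still wins by about 2^899000 to 1
```

However you set the numbers, as long as the complexity gap between “Omega is real” and “my brain is glitching” exceeds the total evidence a finite life history can supply, the rational agent keeps betting on the glitch.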