I haven’t seen you take into account the relative costs of error of the two beliefs.

A few months ago, I asked:

Suppose Omega or one of its ilk says to you, “Here’s a game we can play. I have an
infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I’ll double your utility for the rest of your life. If you draw a card with a skull, I’ll kill you.”
How many cards do you draw?
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I’m uncomfortable about playing the lottery with the universe if it’s the only one we’ve got.
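To make the arithmetic behind “an expected-utility maximizer would” explicit: each draw doubles your utility with probability 1/2, kills you with probability 1/10, and presumably does nothing with probability 4/10. A minimal sketch, assuming death counts as 0 utility and that the unmarked cards really do nothing (neither assumption is spelled out in the problem):

```python
# Expected utility of committing to n draws under the stated odds:
# a star (p = 0.5) doubles utility, a skull (p = 0.1) means death
# (treated as utility 0 here), anything else (p = 0.4) changes nothing.

def expected_utility(n_draws, baseline=1.0):
    per_draw = 0.5 * 2 + 0.4 * 1 + 0.1 * 0   # = 1.4, so each draw gains in expectation
    return baseline * per_draw ** n_draws

def survival_conditioned_utility(n_draws, baseline=1.0):
    # What someone who counts only the branches in which they survive computes:
    # conditional on living through a draw, the multiplier is 1.4 / 0.9 (about 1.56).
    return baseline * ((0.5 * 2 + 0.4 * 1) / 0.9) ** n_draws

for n in (1, 5, 10):
    print(n, expected_utility(n), survival_conditioned_utility(n))
```

On these assumptions the per-draw multiplier exceeds 1 either way, so both calculations say “keep drawing”; the disagreement is over whether the unconditioned or the survival-conditioned number is the one that should move you.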
If a rational, ethical one-worlds believer doesn’t continue drawing cards as long as they can, in a situation where the many-worlds believer would, then we have an asymmetry in the cost of error. Building an FAI that believes in one world, when many worlds is true, causes (possibly very great) inefficiency and repression to delay the destruction of all life. Building an FAI that believes in many worlds, when one world is true, results in annihilating all life in short order. This large asymmetry is enough to compensate for a large asymmetry in probabilities.
(My gut instinct is that there is no asymmetry, and that having a lot of worlds shouldn’t make you less careful with any of them. But that’s just my gut instinct.)
I also think that you can’t, at present, both be rational about updating in response to the beliefs of others, and dismiss one-world theory as dead.
Not only is “What do we believe?” a theoretically distinct question from “What do I do about it?”, but by your logic we should also refuse to believe in spatially infinite universes and inflationary universes, since they also have lots of copies of us.
Not only is “What do we believe?” a theoretically distinct question from “What do I do about it?”
“What do we believe?” is a distinct question; and asking it is committing an error of rationality. The limitations of our minds often force us to use “belief” as a heuristic; but we should remember that it is fundamentally an error, particularly when the consequences are large.
You don’t do the expected-cost analysis when investigating a theory; you should do it before dismissing a theory. Because if, someday, you build an AI and hardcode in the many-worlds assumption, having dismissed the one-world hypothesis from your mind years before and never reconsidered it, you will be committing a grave Bayesian error, with possibly disastrous consequences.
(My cost-of-error statements above are for you specifically. Most people aren’t planning to build a singleton.)
I can’t speak for Eliezer, but if I were building a singleton I probably wouldn’t hard-code my own particular scientific beliefs into it, and even if I did I certainly wouldn’t program any theory at 100% confidence.
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I’m uncomfortable about playing the lottery with the universe if it’s the only one we’ve got.
Omega clearly has more than one universe up his sleeve. It doesn’t take too many doublings of my utility function before a further double would require more entropy than is contained in this one. Just how many galaxies’ worth of matter perfectly optimised for my benefit do I really need?
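To put a rough number on “it doesn’t take too many doublings”: suppose, as a large simplifying assumption not made above, that utility can grow at most in proportion to usable physical resources, and take the commonly quoted figure of roughly 10^122 bits for the maximum entropy of the observable universe. Then the supply of doublings in a single universe runs out quickly:

```python
# How many doublings fit in one universe, assuming utility grows at most
# linearly with usable resources and taking ~10^122 bits as a commonly
# quoted estimate of the observable universe's maximum entropy.
import math

universe_bits = 10 ** 122
print(math.log2(universe_bits))   # roughly 405 doublings
```

On those assumptions a few hundred draws would already exhaust this universe, which is why the comment above reads Omega as having more than one universe up his sleeve.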
The problem here is that it is hard to imagine Omega actually being able to double utility. Doubling utility is hard. It really would be worth the risk of gambling indefinitely if Omega actually had the power to do what he promised; if it isn’t worth it, then by definition you have your utility function wrong. In fact, if exactly half of the cards killed you and the other half doubled utility, it would still be worth gambling unless you assign exactly 0 utility to anything else in the universe in the case of your death.
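Spelling out the arithmetic in that last sentence: if U is your current utility and D is the utility you assign to the world in the event of your death, a single half-and-half draw is worth 0.5 · 2U + 0.5 · D in expectation, which beats the U you keep by not drawing exactly when D > 0. So the draw is worth taking unless you place zero (or negative) value on everything that happens after you die.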
Omega knows you’ll draw a skull before you get that many doublings.
That would be a different problem. Either the participant is informed that the probability distribution in question has anthropic bias based on the gamemaster’s limits or the gamemaster is not Omega-like.
I think that someone who believes in many-worlds will keep drawing cards until they die.
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality … but assume Omega can do this.
The payoff, then, looks like it favors going to an arbitrarily high number given that quantum immortality is true. Honestly, my gut response is that I would go to either 3 draws, 9 draws, or 13 draws depending on how risk-averse I felt and how much utility I expected as my baseline (a twice-as-high utility before doubling lets me go one doubling less).
I think this says that my understanding of utility falls prey to diminishing returns when it shouldn’t (partially a problem with utility itself), and that I don’t really believe in quantum immortality—because I am choosing a response that is optimal for a non-quantum immortality scenario.
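For what it’s worth, here is a sketch of where stopping points like the 9 draws mentioned above can come from. It assumes, contrary to the literal problem (which doubles utility itself, so diminishing returns shouldn’t apply), that n doublings are worth only baseline + n units, that death is worth 0, and that surviving n draws has probability 0.9^n:

```python
# Sketch of why a diminishing-returns valuation gives a finite stopping point.
# Assumes n doublings are worth only (baseline + n) rather than baseline * 2**n,
# and that death is worth 0, so committing to n draws is worth 0.9**n * (baseline + n).

def value(n_draws, baseline=1.0):
    return 0.9 ** n_draws * (baseline + n_draws)

def best_n(baseline=1.0, horizon=50):
    return max(range(horizon + 1), key=lambda n: value(n, baseline))

print(best_n(baseline=1.0))  # around 8 or 9 draws
print(best_n(baseline=2.0))  # about one draw fewer
```

Under these assumptions the optimum sits near eight or nine draws, and doubling the baseline shifts it down by about one, matching the point above that a twice-as-high baseline utility lets you go one doubling less.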
But in any reasonable situation where I encounter this scenario, my response is accurate: it takes into account my uncertainty about immortality (requires a few more things than just the MWI) and also accounts for me updating my beliefs about quantum immortality based on evidence from the bet. That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal. Utility is so complicated, and doubling just gets insane so quickly.
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality … but assume Omega can do this.
Neither the problem itself nor this response need make any mention of quantum immortality. Given an understanding of many-worlds, ‘belief in quantum immortality’ is just a statement about preferences given a certain type of scenario. There isn’t some kind of special phenomenon involved, just a matter of choosing what sort of preferences you have over future branches.
That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal.
No, no, no! Apart from being completely capricious with essentially arbitrary motivations, they aren’t betting against quantum immortality. They are betting a chance of killing someone against a chance of making ridiculous changes to the universe. QI just doesn’t play a part in their payoffs at all.