When you go to infinity, you’d need to define additional mathematical structure that answers your question. You can’t just conclude that the correct course of action is to keep drawing cards for eternity, doing nothing else. Even if at each moment the right action is to draw one more card, when you consider the overall strategy, the strategy of drawing cards for all time may be a wrong strategy.
For example, consider the following preference on infinite strings. A string has utility 0, unless it has the form 11111.....11112222...., that is, a finite number of 1s followed by an infinite number of 2s, in which case its utility is the number of 1s. Clearly, a string of this form with one more 1 has higher utility than one without, and so a string with one more 1 should be preferred. But a string consisting only of 1s doesn’t have the non-zero-utility form, because it doesn’t have the tail of infinitely many 2s. It’s a fallacy to follow an incremental argument to infinity. Instead, one must follow a one-step argument that considers the infinite objects as a whole.
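A minimal sketch of this valuation (the symbolic encoding is my own, just for illustration): since an infinite string can’t be held in memory, each outcome is described by its shape, and the utility function reads off the count of 1s only when the tail of 2s is present.

```python
# Hypothetical encoding for illustration: an infinite string is described
# symbolically, either as n 1s followed by infinitely many 2s, or as 1s forever.

def utility(outcome):
    kind, *rest = outcome
    if kind == "ones_then_twos":      # 1...1 (n times) 2 2 2 ...
        (n,) = rest
        return n                      # utility = the number of 1s
    if kind == "all_ones":            # 1 1 1 ... with no tail of 2s
        return 0                      # not of the non-zero-utility form
    raise ValueError(kind)

# Adding one more 1 is always an improvement...
assert utility(("ones_then_twos", 6)) > utility(("ones_then_twos", 5))

# ...but the "limit" of always adding one more 1 is not an improvement at all.
assert utility(("all_ones",)) == 0
```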
See also Arntzenius, Elga, and Hawthorne: “Bayesianism, Infinite Decisions, and Binding”.
What you say sounds reasonable, but I’m not sure how I can apply it in this example. Can you elaborate?
Consider Eliezer’s choice of strategies at the beginning of the game. He can either stop after drawing n cards for some integer n, or draw an infinite number of cards. First, supposing it takes 10 seconds to draw a card,
EU(draw an infinite number of cards) = 1⁄2 U(live 10 seconds) + 1⁄4 U(live 20 seconds) + 1⁄8 U(live 30 seconds) …
which obviously converges to a small number. On the other hand, EU(stop after n+1 cards) > EU(stop after n cards) for all n. So what should he do?
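To see the tension numerically, here is a toy calculation; the specific numbers are assumptions of mine, not part of the problem statement: U0 = 0 for dying now, U1 = 100 for the status quo, each lucky card moves you from U to U + 3*(U − U0), and dying on draw k is valued at the 10·k seconds lived.

```python
# Toy numbers for illustration only; none of them come from the problem statement.
U0, U1 = 0.0, 100.0          # assumed: utility of dying now / of the status quo

def prize(n):
    """Utility of surviving n lucky draws and then stopping to enjoy the result."""
    u = U1
    for _ in range(n):
        u = u + 3 * (u - U0)  # Omega's promised improvement per lucky card
    return u

def eu_stop_after(n):
    """Expected utility of the strategy: draw n cards, then stop."""
    death_terms = sum(0.5**k * 10 * k for k in range(1, n + 1))  # die on draw k
    return death_terms + 0.5**n * prize(n)                        # survive all n

def eu_draw_forever(terms=200):
    """Draw for eternity: with probability 1 you die on some finite draw k."""
    return sum(0.5**k * 10 * k for k in range(1, terms + 1))      # converges to 20

for n in (1, 2, 5, 10):
    print(n, eu_stop_after(n))        # strictly increasing in n
print("forever:", eu_draw_forever())  # about 20, worse than any finite stopping rule
```

Under these toy assumptions every finite stopping rule beats the one before it, yet the “limit” policy of never stopping does worst of all, which is the same structure as the 1s-and-2s example.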
This exposes a hole in the problem statement: what does Omega’s prize measure? We determined that U0 is the counterfactual where Omega kills you and U1 is the counterfactual where it does nothing, but what is U2 = U1 + 3*(U1 - U0)? It seems to be the expected utility of the event where you draw the lucky card, in which case this event contains, in particular, your future decisions to continue drawing cards. But if so, this places a limit on how much your utility can be improved in the later rounds: if your utility kept increasing, it would contradict the first round’s statement that your utility is going to be exactly U2, and no more. Utility can’t change, as each utility is a valuation of a specific event in the sample space.
So the alternative formulation that removes this contradiction is for Omega to assert only that the expected utility, given that you receive a lucky card, is no less than U2. In this case the right strategy seems to be to continue drawing cards indefinitely, since the utility you receive could lie in something other than your own life, which is now spent on nothing but drawing cards.
This, however, seems to sidestep the issue. What if the only utility you see is in your future actions, which don’t include picking cards, and you can’t interleave card-picking with other actions, that is, you must allot a dedicated stretch of time to picking cards?
You can recast the problem of choosing each of the infinite number of decisions (or of choosing one among all the available, in some sense infinite, sequences of decisions) as the problem of choosing a finite “seed” strategy for making decisions. Say only a finite number of strategies is available, for example only what fits in the memory of the computer that starts the enterprise; the computer can be expanded once the experiment is under way, but the first version has a specified limit. In this case, the right program is as close to a Busy Beaver as you can get: you draw cards as long as possible, but only finitely long, and after that you stop and go on to enjoy the actual life.
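As a rough illustration of this recasting (the memory bound and the counter are stand-ins I invented, not part of the setup): a seed strategy is a finite object, and the decision stream it generates is fixed once the seed is.

```python
from itertools import islice

MEMORY_BITS = 16                      # assumed size limit on the first version

def decisions(stop_after):
    """Decision stream generated by a finite seed: draw until stop_after, then stop."""
    for _ in range(stop_after):
        yield "draw"
    while True:
        yield "stop"                  # from then on, go live the actual life

# The best seed expressible within the memory limit draws for as long as the
# machine can count: finitely long, but only as close to the longest-running
# (Busy-Beaver-like) behaviour as a plain counter can approximate.
best_seed = 2**MEMORY_BITS - 1
print(best_seed)                      # 65535 rounds of drawing, then stop
print(list(islice(decisions(3), 5)))  # ['draw', 'draw', 'draw', 'stop', 'stop']
```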
Why are you treating time as infinite? Surely it’s finite, just taking unbounded values?
But you’re not asked to decide a strategy for all of time. You can change your decision at every round freely.
You can’t change any fixed thing, you can only determine it. Change is a timeful concept. Change appears when you compare now and tomorrow, not when you compare the same thing with itself. You can’t change the past, and you can’t change the future. What you can change about the future is your plan for the future, or your knowledge: as time goes on, your present idea about a fact becomes a different idea tomorrow.
When you “change” your strategy, what you are really doing is changing your mind about what you’re planning. The question you are trying to answer is what to actually do, which decisions to implement at each point. A strategy for all time is a generator of decisions at each given moment, an algorithm that runs and outputs a stream of decisions. If you know something about each particular decision, you can make a general statement about the whole stream. If you know that each next decision is going to be “accept” as opposed to “decline”, you can prove that the resulting stream is equivalent to an infinite stream that answers only “accept”, at every step. And in the end you have a process: the consequences of your decision-making algorithm consist in all of the decisions. You can’t change that consequence, as the consequence is what actually happens; if you changed your mind about making a particular decision along the way, the effect of that change is already factored into the resulting stream of actions.
The consequentialist preference compares the effect of the whole infinite stream of potential decisions, and unless you know the future is finite, the state space will contain elements corresponding to infinite decision traces. In this state space there is an infinite stream corresponding to deciding to keep picking cards for eternity.
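A minimal sketch of the strategy-as-generator picture (the rule names are made up for illustration): each rule fixes the entire decision stream, and a rule that “reconsiders” at some round is just a different rule, whose stream already contains that reconsideration.

```python
def impulsive(round_no):
    """At every round 'one more card' looks right, so this rule always draws."""
    return "draw"

def reconsiders_at(k):
    """A rule that 'changes its mind' at round k; the change is part of the rule."""
    return lambda round_no: "draw" if round_no < k else "stop"

def prefix(rule, rounds):
    """The first `rounds` entries of the (conceptually infinite) decision stream."""
    return [rule(r) for r in range(1, rounds + 1)]

print(prefix(impulsive, 6))           # ['draw'] * 6 -- the stream of drawing forever
print(prefix(reconsiders_at(4), 6))   # ['draw', 'draw', 'draw', 'stop', 'stop', 'stop']
```

A consequentialist valuation then compares these whole streams against each other, for instance with the 1s-and-2s utility sketched earlier, rather than comparing round k against round k+1.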
Thanks, I understand now.
Whoa.
Is there something I can take that would help me understand that better?
I’m more or less just talking about infinite streams, which are a well-known structure in math. You can try looking at the following references, or find something else.
P. Cousot & R. Cousot (1992). “Inductive definitions, semantics and abstract interpretations”. In POPL ’92: Proceedings of the 19th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 83–94, New York, NY, USA. ACM. http://www.di.ens.fr/~cousot/COUSOTpapers/POPL92.shtml
J. J. M. M. Rutten (2003). “Behavioural differential equations: a coinductive calculus of streams, automata, and power series”. Theoretical Computer Science 308(1–3):1–53. http://www.cwi.nl/~janr/papers/files-of-papers/tcs308.pdf