I’ve never met an infinite decision tree in my life, and I doubt I ever will. Problems with an infinite solution space simply can’t be solved optimally; that property doesn’t reveal any decision-theoretic inconsistencies that could come up in real life.
Consider this game with a tree structure: you pick an arbitrary natural number, and then your opponent does as well. The player who chose the higher number wins. Clearly, you cannot win this game: no matter which number you pick, your opponent can simply add one to it. The same holds for picking the positive rational number below 1 that is closest to 1: if you pick n/d, your opponent picks (n+1)/(d+1), which is strictly closer to 1 (since (n+1)d − n(d+1) = d − n > 0), and wins.
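The second player’s dominance can be checked mechanically; here is a minimal sketch (the function names are my own):

```python
from fractions import Fraction

def beat_natural(n: int) -> int:
    """Second player's winning reply in the pick-a-natural-number game."""
    return n + 1

def beat_rational(q: Fraction) -> Fraction:
    """Second player's reply in the closest-to-1 game: adding one to
    numerator and denominator moves strictly closer to 1."""
    return Fraction(q.numerator + 1, q.denominator + 1)

# Whatever the first player picks, the reply is strictly better:
q = Fraction(99, 100)
r = beat_rational(q)
print(r, 1 - r < 1 - q)  # 100/101 True
```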
The idea of using a busy beaver function is good, and if you can use the entire universe to encode the states of the busy beaver with the largest number of states possible (and a long enough tape), that constitutes the optimal solution; but that only takes us further out into the realm of fiction.
“You are finite. Zathras is finite. This utility function has infinities in it. No, not good. Never use that.”
— Not Babylon 5
But I do not choose my utility function as a means to get something. My utility function describes what I want to choose means to get. And I’m pretty sure it’s unbounded.
You’ve only expended a finite amount of computation on the question, though; and you’re running on corrupted hardware. How confident can you be that you have already correctly distinguished an unbounded utility function from one with a very large finite bound?
(A genocidal, fanatical asshole once said: “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”)
I do think it possible I may be mistaken.
The tax man’s dilemma, an infinite decision tree grounded in reality:
Assume you’re the anthropomorphization of government, and you have a decision to make: you need to set the ideal tax rate for businesses.
In your society, corporations reliably make 5% returns on investments, accounting for inflation. That money is reliably reinvested, although not necessarily in the same corporation.
How should you tax those returns in order to maximize total utility? You may change taxes at any point. Also, you’re the anthropomorphic representation of government—you are, for all intents and purposes, immortal.
Assume a future-utility discount rate lower than the investment return rate, and assume you don’t know the money-utility relationship. For simplicity, you can say you weigh the possibility of future disasters that require immense funds against the possibility that money has declining utility over time, producing a constant relationship. Assume that your returns will be less than corporate returns, and corporate utility will be less than your utility. (Simplified: you produce no investment returns, and corporations produce no utility.)
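Under these assumptions the tree’s pathology shows up numerically: with the return rate above the discount rate, confiscating the compounded capital at year T yields more discounted revenue the longer you wait, so every finite plan is dominated by waiting one more year. A toy sketch (the 3% discount rate is my own illustrative choice, consistent with it being below the 5% return):

```python
def discounted_revenue(T: int, capital: float = 1.0,
                       ret: float = 0.05, disc: float = 0.03) -> float:
    """Discounted value of taxing away all capital at year T,
    after it has compounded untaxed at the corporate return rate."""
    return capital * (1 + ret) ** T / (1 + disc) ** T

for T in (10, 50, 100):
    print(T, discounted_revenue(T))
# Discounted revenue grows without bound in T: deferring taxation is
# always better, so the infinite tree has no optimal leaf.
```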
I never saw an infinite decision tree
I doubt I’ll ever see one
But if it weren’t imaginary
I’d rather like to play one
Hmm I guess I see now why so few classic poets used decision theory as their inspiration.
I like this. I was going to say something like,
“Suppose , what does that say about your solutions designed for real life?” and screw you I hate when people do this and think it is clever. Utility monster is another example of this sort of nonsense.
but you said the same thing, and less rudely, so upvoted.
If it’s impossible to win, because your opponent always picks second, then every choice is optimal.
If you pick simultaneously, then picking the highest number you can describe is optimal. So that’s another situation where there is no optimal solution for an infinite mind, but for a finite mind there is one.