I’ll have a think. An optimal decision maker for all scenarios seems impossible if your utility is reduced by an amount proportional to the time taken to make the decision (“solving death” has this structure: fewer people die if you solve it earlier). The best general approach I can think of is an infinite table mapping each scenario to the decision computed by something like your UDT + oracle for that scenario. And in each individual scenario this can be beaten by a specialised algorithm for that scenario, which needs no lookup.
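To make the time-penalty structure concrete, here is a minimal toy sketch (my own illustration, with made-up numbers, not anything from the discussion above): utility is the scenario’s base value minus a cost proportional to decision time, so an agent that hard-codes the answer for one scenario beats any general agent that has to look the answer up or compute it.

```python
# Toy sketch: utility decreases linearly with the time taken to decide.
# The cost rate and the times below are assumptions for illustration only.

TIME_COST = 1.0  # utility lost per unit of decision time (assumed)

def utility(base_value: float, decision_time: float) -> float:
    """Value of reaching the right decision, minus the delay penalty."""
    return base_value - TIME_COST * decision_time

BASE_VALUE = 100.0  # hypothetical value of the correct decision in one scenario

# A general agent (e.g. the infinite table, or UDT + oracle) spends time
# locating/computing the answer; a specialised agent hard-codes it.
general_agent_time = 5.0
specialised_agent_time = 0.1

print(utility(BASE_VALUE, general_agent_time))      # 95.0
print(utility(BASE_VALUE, specialised_agent_time))  # 99.9
```

Whatever decision the general agent eventually outputs, the specialised agent can output the same one faster, so in that single scenario it scores strictly higher under this kind of utility.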
And it still contains an infinite quantity, which I don’t like in theories I might one day want to connect to the real world (and it requires an infinite amount of precomputation).
I wonder if there is a quality other than strict optimality that we should look for. Making the optimal decision in most problems (and what would the correct weighting over scenarios be)? Making the right decision eventually?
Anyway I’ll think some more. It is definitely thornier and nastier than “fair” problems.